Test Report: Docker_Linux_crio_arm64 22047

4655c6aa5049635fb4cb98fc0f74f66a1c57dbdb:2025-12-06:42658

Test failures (40/316)

Order  Failed test  Duration (s)
38 TestAddons/serial/Volcano 0.37
44 TestAddons/parallel/Registry 16.01
45 TestAddons/parallel/RegistryCreds 0.53
46 TestAddons/parallel/Ingress 143.66
47 TestAddons/parallel/InspektorGadget 6.3
48 TestAddons/parallel/MetricsServer 6.52
50 TestAddons/parallel/CSI 41.56
51 TestAddons/parallel/Headlamp 3.3
52 TestAddons/parallel/CloudSpanner 5.27
53 TestAddons/parallel/LocalPath 9.45
54 TestAddons/parallel/NvidiaDevicePlugin 6.28
55 TestAddons/parallel/Yakd 5.3
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 501.74
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 369.07
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 2.49
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 2.51
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 2.49
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 735.25
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 2.32
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 0.07
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 1.78
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 3.23
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 2.42
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 241.7
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 3.13
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 0.08
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.36
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.36
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.37
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.32
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.54
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.44
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 0.1
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 93
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 2.36
293 TestJSONOutput/pause/Command 1.76
299 TestJSONOutput/unpause/Command 1.84
358 TestKubernetesUpgrade 793.87
384 TestPause/serial/Pause 6.83
479 TestNetworkPlugins/group/kindnet/NetCatPod 7200.126
TestAddons/serial/Volcano (0.37s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-545880 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-545880 addons disable volcano --alsologtostderr -v=1: exit status 11 (364.705465ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1206 10:28:41.021984  371737 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:28:41.027877  371737 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:28:41.027901  371737 out.go:374] Setting ErrFile to fd 2...
	I1206 10:28:41.027908  371737 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:28:41.028208  371737 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 10:28:41.028534  371737 mustload.go:66] Loading cluster: addons-545880
	I1206 10:28:41.028925  371737 config.go:182] Loaded profile config "addons-545880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:28:41.028938  371737 addons.go:622] checking whether the cluster is paused
	I1206 10:28:41.029046  371737 config.go:182] Loaded profile config "addons-545880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:28:41.029056  371737 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:28:41.029603  371737 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:28:41.070754  371737 ssh_runner.go:195] Run: systemctl --version
	I1206 10:28:41.070817  371737 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:28:41.102487  371737 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:28:41.230471  371737 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:28:41.230695  371737 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:28:41.267021  371737 cri.go:89] found id: "76e109752916eb227cc2778fc40189f2225fe99abbb5caa1dc492604fa63b088"
	I1206 10:28:41.267094  371737 cri.go:89] found id: "c3f6082a7a0c7c8725c19d46cd708aeb5d4126a349db5fe93809b3ef79169052"
	I1206 10:28:41.267122  371737 cri.go:89] found id: "d59b55d8c46bd316322b59f76bbed7bf1ba7ae09f22a8d7446896bb650747b97"
	I1206 10:28:41.267141  371737 cri.go:89] found id: "ee27785571f1406c526ba554d46111adfc871bf6b5094f993b79d922ed4e4e88"
	I1206 10:28:41.267176  371737 cri.go:89] found id: "b58456cd2cfa54ef5616f519f55a6b7b272d08f96ca019bf4d2f47f9dc581de3"
	I1206 10:28:41.267198  371737 cri.go:89] found id: "77b77d1ecb28a6271e776faf9148345a91cf28a8eb40f9adc7343e6d90864f3a"
	I1206 10:28:41.267224  371737 cri.go:89] found id: "0bb771e3965c7313e7a976270ee1cf4f72f901f19cf787e7ef330577f83ca8b0"
	I1206 10:28:41.267255  371737 cri.go:89] found id: "82475061c71650dc2d5ef1c1b6fb59dc1e8d85ff79c3598c514ad231134b1d1a"
	I1206 10:28:41.267279  371737 cri.go:89] found id: "52d954765a231dbdcd394aa043b7231f3b45f20db74ede3718de67caabeea5a3"
	I1206 10:28:41.267300  371737 cri.go:89] found id: "e292596ad2f80045ad3b706145d35d90657c46cc5300b047c28f357a09003684"
	I1206 10:28:41.267332  371737 cri.go:89] found id: "ab9b79c2c68c1be8095a1a81cd7d444d52723042c6629740074d930656007cfd"
	I1206 10:28:41.267352  371737 cri.go:89] found id: "eaaabe40faa63af2c6b5e0ffb01fdbff88ff53227bb4a4b884fca2db86a16b38"
	I1206 10:28:41.267367  371737 cri.go:89] found id: "aea66f37874913be4b5420f3d08acfb0b6388ccfb25c63270ce6741cf675ba44"
	I1206 10:28:41.267440  371737 cri.go:89] found id: "bbd2d73693ff14927141ea51103bb4d99dce673d1531632ca460362ab91bc129"
	I1206 10:28:41.267463  371737 cri.go:89] found id: "778b08b9b628cb82a3c8742868fe4b9a4b0dbad3c250600336afae611d54dcfd"
	I1206 10:28:41.267489  371737 cri.go:89] found id: "358be0ebbc23e420f6fde28e811fd30f1d4064a0e72dfb910c4e719a8d628d3b"
	I1206 10:28:41.267539  371737 cri.go:89] found id: "1c178782b46ae3df28453a2dd88fc57e38eb824abae86db11976cc74cf8b87be"
	I1206 10:28:41.267565  371737 cri.go:89] found id: "4f2a86e87c1bf385e11b164e78ea4f4e9844b0534c9bec2d841dfb406fec8a56"
	I1206 10:28:41.267589  371737 cri.go:89] found id: "410f38934f188529387872c7a0345e42f47f3295f320a1765aa24e1b9a271d4d"
	I1206 10:28:41.267626  371737 cri.go:89] found id: "69ffc3958d44bd262b1360fdb7c52481a97a7e588cd4d05224b3704341139dd0"
	I1206 10:28:41.267650  371737 cri.go:89] found id: "66618904a8d73226678429bf63c1faac7f76d45b9de953c282d294fedfc2cfb6"
	I1206 10:28:41.267655  371737 cri.go:89] found id: "9717574a8255200f8dddcf7a2550e63bdb6b4bb664ec25aeb8635f9277183f01"
	I1206 10:28:41.267658  371737 cri.go:89] found id: "6bfecd83e062db176f5124191f88157b58c2a91ba34d40d6c82c9fbd3c6fee47"
	I1206 10:28:41.267661  371737 cri.go:89] found id: ""
	I1206 10:28:41.267734  371737 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 10:28:41.283594  371737 out.go:203] 
	W1206 10:28:41.286624  371737 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:28:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:28:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 10:28:41.286646  371737 out.go:285] * 
	* 
	W1206 10:28:41.301115  371737 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:28:41.304232  371737 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-545880 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.37s)
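Note: each of the `addons disable` failures shown in this report exits with the same MK_ADDON_DISABLE_PAUSED error: minikube's paused-state check shells out to `sudo runc list -f json`, which fails on this crio node because `/run/runc` does not exist, even though the `crictl ps` listing just before it succeeds. A minimal sketch for re-running the check by hand, assuming the addons-545880 profile is still up (the first two commands are taken from the stderr log above; the final `ls` is a hypothetical follow-up, not part of the test):

	out/minikube-linux-arm64 -p addons-545880 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system   # succeeds in the log
	out/minikube-linux-arm64 -p addons-545880 ssh -- sudo runc list -f json    # fails: "open /run/runc: no such file or directory"
	out/minikube-linux-arm64 -p addons-545880 ssh -- ls -ld /run/runc          # hypothetical: confirm the runc state dir is absent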

TestAddons/parallel/Registry (16.01s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 34.117663ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-lrjzv" [ccdcbcbc-0689-4862-8dd1-415689504519] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003310737s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-j4zp9" [7cd8d746-23f2-448e-a9a6-8281f58e0d70] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003251532s
addons_test.go:392: (dbg) Run:  kubectl --context addons-545880 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-545880 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-545880 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.44383337s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-545880 ip
2025/12/06 10:29:08 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-545880 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-545880 addons disable registry --alsologtostderr -v=1: exit status 11 (268.550632ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1206 10:29:08.463638  372685 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:29:08.464529  372685 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:29:08.464583  372685 out.go:374] Setting ErrFile to fd 2...
	I1206 10:29:08.464605  372685 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:29:08.464903  372685 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 10:29:08.465255  372685 mustload.go:66] Loading cluster: addons-545880
	I1206 10:29:08.465706  372685 config.go:182] Loaded profile config "addons-545880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:29:08.465758  372685 addons.go:622] checking whether the cluster is paused
	I1206 10:29:08.465904  372685 config.go:182] Loaded profile config "addons-545880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:29:08.465944  372685 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:29:08.466495  372685 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:29:08.487013  372685 ssh_runner.go:195] Run: systemctl --version
	I1206 10:29:08.487081  372685 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:29:08.506604  372685 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:29:08.614666  372685 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:29:08.614801  372685 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:29:08.645285  372685 cri.go:89] found id: "76e109752916eb227cc2778fc40189f2225fe99abbb5caa1dc492604fa63b088"
	I1206 10:29:08.645309  372685 cri.go:89] found id: "c3f6082a7a0c7c8725c19d46cd708aeb5d4126a349db5fe93809b3ef79169052"
	I1206 10:29:08.645314  372685 cri.go:89] found id: "d59b55d8c46bd316322b59f76bbed7bf1ba7ae09f22a8d7446896bb650747b97"
	I1206 10:29:08.645318  372685 cri.go:89] found id: "ee27785571f1406c526ba554d46111adfc871bf6b5094f993b79d922ed4e4e88"
	I1206 10:29:08.645321  372685 cri.go:89] found id: "b58456cd2cfa54ef5616f519f55a6b7b272d08f96ca019bf4d2f47f9dc581de3"
	I1206 10:29:08.645325  372685 cri.go:89] found id: "77b77d1ecb28a6271e776faf9148345a91cf28a8eb40f9adc7343e6d90864f3a"
	I1206 10:29:08.645328  372685 cri.go:89] found id: "0bb771e3965c7313e7a976270ee1cf4f72f901f19cf787e7ef330577f83ca8b0"
	I1206 10:29:08.645351  372685 cri.go:89] found id: "82475061c71650dc2d5ef1c1b6fb59dc1e8d85ff79c3598c514ad231134b1d1a"
	I1206 10:29:08.645361  372685 cri.go:89] found id: "52d954765a231dbdcd394aa043b7231f3b45f20db74ede3718de67caabeea5a3"
	I1206 10:29:08.645370  372685 cri.go:89] found id: "e292596ad2f80045ad3b706145d35d90657c46cc5300b047c28f357a09003684"
	I1206 10:29:08.645374  372685 cri.go:89] found id: "ab9b79c2c68c1be8095a1a81cd7d444d52723042c6629740074d930656007cfd"
	I1206 10:29:08.645377  372685 cri.go:89] found id: "eaaabe40faa63af2c6b5e0ffb01fdbff88ff53227bb4a4b884fca2db86a16b38"
	I1206 10:29:08.645381  372685 cri.go:89] found id: "aea66f37874913be4b5420f3d08acfb0b6388ccfb25c63270ce6741cf675ba44"
	I1206 10:29:08.645385  372685 cri.go:89] found id: "bbd2d73693ff14927141ea51103bb4d99dce673d1531632ca460362ab91bc129"
	I1206 10:29:08.645393  372685 cri.go:89] found id: "778b08b9b628cb82a3c8742868fe4b9a4b0dbad3c250600336afae611d54dcfd"
	I1206 10:29:08.645405  372685 cri.go:89] found id: "358be0ebbc23e420f6fde28e811fd30f1d4064a0e72dfb910c4e719a8d628d3b"
	I1206 10:29:08.645413  372685 cri.go:89] found id: "1c178782b46ae3df28453a2dd88fc57e38eb824abae86db11976cc74cf8b87be"
	I1206 10:29:08.645442  372685 cri.go:89] found id: "4f2a86e87c1bf385e11b164e78ea4f4e9844b0534c9bec2d841dfb406fec8a56"
	I1206 10:29:08.645450  372685 cri.go:89] found id: "410f38934f188529387872c7a0345e42f47f3295f320a1765aa24e1b9a271d4d"
	I1206 10:29:08.645454  372685 cri.go:89] found id: "69ffc3958d44bd262b1360fdb7c52481a97a7e588cd4d05224b3704341139dd0"
	I1206 10:29:08.645459  372685 cri.go:89] found id: "66618904a8d73226678429bf63c1faac7f76d45b9de953c282d294fedfc2cfb6"
	I1206 10:29:08.645462  372685 cri.go:89] found id: "9717574a8255200f8dddcf7a2550e63bdb6b4bb664ec25aeb8635f9277183f01"
	I1206 10:29:08.645465  372685 cri.go:89] found id: "6bfecd83e062db176f5124191f88157b58c2a91ba34d40d6c82c9fbd3c6fee47"
	I1206 10:29:08.645469  372685 cri.go:89] found id: ""
	I1206 10:29:08.645537  372685 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 10:29:08.660885  372685 out.go:203] 
	W1206 10:29:08.663851  372685 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:29:08Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:29:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 10:29:08.663873  372685 out.go:285] * 
	* 
	W1206 10:29:08.669063  372685 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:29:08.672021  372685 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-545880 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (16.01s)

TestAddons/parallel/RegistryCreds (0.53s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.672041ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-545880
addons_test.go:332: (dbg) Run:  kubectl --context addons-545880 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-545880 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-545880 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (293.838244ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1206 10:29:37.817629  373742 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:29:37.818331  373742 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:29:37.818364  373742 out.go:374] Setting ErrFile to fd 2...
	I1206 10:29:37.818385  373742 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:29:37.818678  373742 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 10:29:37.818990  373742 mustload.go:66] Loading cluster: addons-545880
	I1206 10:29:37.819459  373742 config.go:182] Loaded profile config "addons-545880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:29:37.819496  373742 addons.go:622] checking whether the cluster is paused
	I1206 10:29:37.819630  373742 config.go:182] Loaded profile config "addons-545880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:29:37.819657  373742 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:29:37.820181  373742 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:29:37.839049  373742 ssh_runner.go:195] Run: systemctl --version
	I1206 10:29:37.839114  373742 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:29:37.862921  373742 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:29:37.970382  373742 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:29:37.970477  373742 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:29:38.017875  373742 cri.go:89] found id: "76e109752916eb227cc2778fc40189f2225fe99abbb5caa1dc492604fa63b088"
	I1206 10:29:38.017914  373742 cri.go:89] found id: "c3f6082a7a0c7c8725c19d46cd708aeb5d4126a349db5fe93809b3ef79169052"
	I1206 10:29:38.017921  373742 cri.go:89] found id: "d59b55d8c46bd316322b59f76bbed7bf1ba7ae09f22a8d7446896bb650747b97"
	I1206 10:29:38.017926  373742 cri.go:89] found id: "ee27785571f1406c526ba554d46111adfc871bf6b5094f993b79d922ed4e4e88"
	I1206 10:29:38.017929  373742 cri.go:89] found id: "b58456cd2cfa54ef5616f519f55a6b7b272d08f96ca019bf4d2f47f9dc581de3"
	I1206 10:29:38.017933  373742 cri.go:89] found id: "77b77d1ecb28a6271e776faf9148345a91cf28a8eb40f9adc7343e6d90864f3a"
	I1206 10:29:38.017937  373742 cri.go:89] found id: "0bb771e3965c7313e7a976270ee1cf4f72f901f19cf787e7ef330577f83ca8b0"
	I1206 10:29:38.017940  373742 cri.go:89] found id: "82475061c71650dc2d5ef1c1b6fb59dc1e8d85ff79c3598c514ad231134b1d1a"
	I1206 10:29:38.017943  373742 cri.go:89] found id: "52d954765a231dbdcd394aa043b7231f3b45f20db74ede3718de67caabeea5a3"
	I1206 10:29:38.017951  373742 cri.go:89] found id: "e292596ad2f80045ad3b706145d35d90657c46cc5300b047c28f357a09003684"
	I1206 10:29:38.017954  373742 cri.go:89] found id: "ab9b79c2c68c1be8095a1a81cd7d444d52723042c6629740074d930656007cfd"
	I1206 10:29:38.017958  373742 cri.go:89] found id: "eaaabe40faa63af2c6b5e0ffb01fdbff88ff53227bb4a4b884fca2db86a16b38"
	I1206 10:29:38.017961  373742 cri.go:89] found id: "aea66f37874913be4b5420f3d08acfb0b6388ccfb25c63270ce6741cf675ba44"
	I1206 10:29:38.017964  373742 cri.go:89] found id: "bbd2d73693ff14927141ea51103bb4d99dce673d1531632ca460362ab91bc129"
	I1206 10:29:38.017968  373742 cri.go:89] found id: "778b08b9b628cb82a3c8742868fe4b9a4b0dbad3c250600336afae611d54dcfd"
	I1206 10:29:38.017978  373742 cri.go:89] found id: "358be0ebbc23e420f6fde28e811fd30f1d4064a0e72dfb910c4e719a8d628d3b"
	I1206 10:29:38.017984  373742 cri.go:89] found id: "1c178782b46ae3df28453a2dd88fc57e38eb824abae86db11976cc74cf8b87be"
	I1206 10:29:38.017990  373742 cri.go:89] found id: "4f2a86e87c1bf385e11b164e78ea4f4e9844b0534c9bec2d841dfb406fec8a56"
	I1206 10:29:38.017994  373742 cri.go:89] found id: "410f38934f188529387872c7a0345e42f47f3295f320a1765aa24e1b9a271d4d"
	I1206 10:29:38.017997  373742 cri.go:89] found id: "69ffc3958d44bd262b1360fdb7c52481a97a7e588cd4d05224b3704341139dd0"
	I1206 10:29:38.018002  373742 cri.go:89] found id: "66618904a8d73226678429bf63c1faac7f76d45b9de953c282d294fedfc2cfb6"
	I1206 10:29:38.018005  373742 cri.go:89] found id: "9717574a8255200f8dddcf7a2550e63bdb6b4bb664ec25aeb8635f9277183f01"
	I1206 10:29:38.018008  373742 cri.go:89] found id: "6bfecd83e062db176f5124191f88157b58c2a91ba34d40d6c82c9fbd3c6fee47"
	I1206 10:29:38.018012  373742 cri.go:89] found id: ""
	I1206 10:29:38.018072  373742 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 10:29:38.035800  373742 out.go:203] 
	W1206 10:29:38.038828  373742 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:29:38Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:29:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 10:29:38.038880  373742 out.go:285] * 
	* 
	W1206 10:29:38.044365  373742 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:29:38.047426  373742 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-545880 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.53s)

TestAddons/parallel/Ingress (143.66s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-545880 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-545880 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-545880 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [56ea7f15-b60b-4ef6-91c2-13f1c93014c2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [56ea7f15-b60b-4ef6-91c2-13f1c93014c2] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003616201s
I1206 10:29:31.207593  364855 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-545880 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-545880 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.590817674s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
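Note: the `ssh: Process exited with status 28` above is curl's timeout exit code (CURLE_OPERATION_TIMEDOUT) propagated through `minikube ssh`, so the in-node request to the ingress never got a response within the test window. A hedged diagnostic sketch, assuming the cluster is still running (profile, namespace, and URL are from the log; these commands are not part of the test itself):

	kubectl --context addons-545880 -n ingress-nginx get pods -o wide   # hypothetical: is the controller actually Ready?
	kubectl --context addons-545880 get endpoints nginx                 # hypothetical: does the nginx Service have endpoints?
	out/minikube-linux-arm64 -p addons-545880 ssh -- curl -sv --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/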
addons_test.go:288: (dbg) Run:  kubectl --context addons-545880 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-545880 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-545880
helpers_test.go:243: (dbg) docker inspect addons-545880:

-- stdout --
	[
	    {
	        "Id": "a93a155d5df0f69f3f6899c99e43c4171f82074157d733e88fc9accf1c14279f",
	        "Created": "2025-12-06T10:26:24.660997861Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 366239,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T10:26:24.727779728Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/a93a155d5df0f69f3f6899c99e43c4171f82074157d733e88fc9accf1c14279f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a93a155d5df0f69f3f6899c99e43c4171f82074157d733e88fc9accf1c14279f/hostname",
	        "HostsPath": "/var/lib/docker/containers/a93a155d5df0f69f3f6899c99e43c4171f82074157d733e88fc9accf1c14279f/hosts",
	        "LogPath": "/var/lib/docker/containers/a93a155d5df0f69f3f6899c99e43c4171f82074157d733e88fc9accf1c14279f/a93a155d5df0f69f3f6899c99e43c4171f82074157d733e88fc9accf1c14279f-json.log",
	        "Name": "/addons-545880",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-545880:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-545880",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a93a155d5df0f69f3f6899c99e43c4171f82074157d733e88fc9accf1c14279f",
	                "LowerDir": "/var/lib/docker/overlay2/2a20fabcb20792bc91f77cc1658dd9acd97c3c9361377ddd08683b2d2c3427d3-init/diff:/var/lib/docker/overlay2/5011226d55616c9977b14c1fe617d1302fe59373df05ce8ec6e21b79143a1c57/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2a20fabcb20792bc91f77cc1658dd9acd97c3c9361377ddd08683b2d2c3427d3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2a20fabcb20792bc91f77cc1658dd9acd97c3c9361377ddd08683b2d2c3427d3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2a20fabcb20792bc91f77cc1658dd9acd97c3c9361377ddd08683b2d2c3427d3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-545880",
	                "Source": "/var/lib/docker/volumes/addons-545880/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-545880",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-545880",
	                "name.minikube.sigs.k8s.io": "addons-545880",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "435281ad4d0520d430fa5da9ba4f57070020402866438f00f22ba5bdcfb57a1d",
	            "SandboxKey": "/var/run/docker/netns/435281ad4d05",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-545880": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:76:fe:d0:d1:fe",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b5ca69fa4acc00d6f9d4dbd4b897fc14fad38da4fafffd1234ebd25cc9478e9c",
	                    "EndpointID": "4e5a0228af364d4e6cf94ca1f26538ee6e7d6addf7f6f4e10b40700a3f0cf517",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-545880",
	                        "a93a155d5df0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-545880 -n addons-545880
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-545880 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-545880 logs -n 25: (1.547256104s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-566856                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-566856 │ jenkins │ v1.37.0 │ 06 Dec 25 10:25 UTC │ 06 Dec 25 10:25 UTC │
	│ start   │ --download-only -p binary-mirror-657674 --alsologtostderr --binary-mirror http://127.0.0.1:46333 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-657674   │ jenkins │ v1.37.0 │ 06 Dec 25 10:25 UTC │                     │
	│ delete  │ -p binary-mirror-657674                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-657674   │ jenkins │ v1.37.0 │ 06 Dec 25 10:25 UTC │ 06 Dec 25 10:25 UTC │
	│ addons  │ enable dashboard -p addons-545880                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-545880          │ jenkins │ v1.37.0 │ 06 Dec 25 10:25 UTC │                     │
	│ addons  │ disable dashboard -p addons-545880                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-545880          │ jenkins │ v1.37.0 │ 06 Dec 25 10:25 UTC │                     │
	│ start   │ -p addons-545880 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-545880          │ jenkins │ v1.37.0 │ 06 Dec 25 10:25 UTC │ 06 Dec 25 10:28 UTC │
	│ addons  │ addons-545880 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-545880          │ jenkins │ v1.37.0 │ 06 Dec 25 10:28 UTC │                     │
	│ addons  │ addons-545880 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-545880          │ jenkins │ v1.37.0 │ 06 Dec 25 10:28 UTC │                     │
	│ addons  │ enable headlamp -p addons-545880 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-545880          │ jenkins │ v1.37.0 │ 06 Dec 25 10:28 UTC │                     │
	│ addons  │ addons-545880 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-545880          │ jenkins │ v1.37.0 │ 06 Dec 25 10:28 UTC │                     │
	│ ip      │ addons-545880 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-545880          │ jenkins │ v1.37.0 │ 06 Dec 25 10:29 UTC │ 06 Dec 25 10:29 UTC │
	│ addons  │ addons-545880 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-545880          │ jenkins │ v1.37.0 │ 06 Dec 25 10:29 UTC │                     │
	│ addons  │ addons-545880 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-545880          │ jenkins │ v1.37.0 │ 06 Dec 25 10:29 UTC │                     │
	│ addons  │ addons-545880 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-545880          │ jenkins │ v1.37.0 │ 06 Dec 25 10:29 UTC │                     │
	│ ssh     │ addons-545880 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-545880          │ jenkins │ v1.37.0 │ 06 Dec 25 10:29 UTC │                     │
	│ addons  │ addons-545880 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-545880          │ jenkins │ v1.37.0 │ 06 Dec 25 10:29 UTC │                     │
	│ addons  │ addons-545880 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-545880          │ jenkins │ v1.37.0 │ 06 Dec 25 10:29 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-545880                                                                                                                                                                                                                                                                                                                                                                                           │ addons-545880          │ jenkins │ v1.37.0 │ 06 Dec 25 10:29 UTC │ 06 Dec 25 10:29 UTC │
	│ addons  │ addons-545880 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-545880          │ jenkins │ v1.37.0 │ 06 Dec 25 10:29 UTC │                     │
	│ addons  │ addons-545880 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-545880          │ jenkins │ v1.37.0 │ 06 Dec 25 10:29 UTC │                     │
	│ addons  │ addons-545880 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-545880          │ jenkins │ v1.37.0 │ 06 Dec 25 10:29 UTC │                     │
	│ ssh     │ addons-545880 ssh cat /opt/local-path-provisioner/pvc-e0007cf2-fb61-4410-a0d1-11cea273d032_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-545880          │ jenkins │ v1.37.0 │ 06 Dec 25 10:29 UTC │ 06 Dec 25 10:29 UTC │
	│ addons  │ addons-545880 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-545880          │ jenkins │ v1.37.0 │ 06 Dec 25 10:29 UTC │                     │
	│ addons  │ addons-545880 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-545880          │ jenkins │ v1.37.0 │ 06 Dec 25 10:30 UTC │                     │
	│ ip      │ addons-545880 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-545880          │ jenkins │ v1.37.0 │ 06 Dec 25 10:31 UTC │ 06 Dec 25 10:31 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:25:59
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:25:59.831901  365843 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:25:59.832034  365843 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:25:59.832044  365843 out.go:374] Setting ErrFile to fd 2...
	I1206 10:25:59.832050  365843 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:25:59.832321  365843 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 10:25:59.832813  365843 out.go:368] Setting JSON to false
	I1206 10:25:59.833652  365843 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":7711,"bootTime":1765009049,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 10:25:59.833731  365843 start.go:143] virtualization:  
	I1206 10:25:59.837083  365843 out.go:179] * [addons-545880] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:25:59.840935  365843 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 10:25:59.841048  365843 notify.go:221] Checking for updates...
	I1206 10:25:59.846845  365843 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:25:59.849772  365843 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 10:25:59.852744  365843 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-362985/.minikube
	I1206 10:25:59.855594  365843 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:25:59.858496  365843 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:25:59.861507  365843 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:25:59.889734  365843 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:25:59.889878  365843 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:25:59.948079  365843 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-06 10:25:59.938847003 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:25:59.948197  365843 docker.go:319] overlay module found
	I1206 10:25:59.951279  365843 out.go:179] * Using the docker driver based on user configuration
	I1206 10:25:59.954241  365843 start.go:309] selected driver: docker
	I1206 10:25:59.954267  365843 start.go:927] validating driver "docker" against <nil>
	I1206 10:25:59.954287  365843 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:25:59.955056  365843 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:26:00.076464  365843 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-06 10:26:00.034895678 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:26:00.076679  365843 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 10:26:00.076929  365843 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 10:26:00.083865  365843 out.go:179] * Using Docker driver with root privileges
	I1206 10:26:00.086971  365843 cni.go:84] Creating CNI manager for ""
	I1206 10:26:00.087065  365843 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:26:00.087080  365843 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 10:26:00.087177  365843 start.go:353] cluster config:
	{Name:addons-545880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-545880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:26:00.092447  365843 out.go:179] * Starting "addons-545880" primary control-plane node in "addons-545880" cluster
	I1206 10:26:00.095350  365843 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 10:26:00.098397  365843 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:26:00.101333  365843 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:26:00.101380  365843 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 10:26:00.101568  365843 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1206 10:26:00.101586  365843 cache.go:65] Caching tarball of preloaded images
	I1206 10:26:00.101689  365843 preload.go:238] Found /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1206 10:26:00.101707  365843 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1206 10:26:00.102099  365843 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/config.json ...
	I1206 10:26:00.102136  365843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/config.json: {Name:mk7e12d5acc9c9b0ed556f29d1343f0847944d5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:26:00.161923  365843 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 to local cache
	I1206 10:26:00.162079  365843 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory
	I1206 10:26:00.162112  365843 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory, skipping pull
	I1206 10:26:00.162117  365843 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in cache, skipping pull
	I1206 10:26:00.162126  365843 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 as a tarball
	I1206 10:26:00.162131  365843 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 from local cache
	I1206 10:26:18.511593  365843 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 from cached tarball
	I1206 10:26:18.511632  365843 cache.go:243] Successfully downloaded all kic artifacts
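The two cache phases above are minikube's image cache at work: the kic base tarball is found on disk, loaded into the daemon, and only then does machine provisioning begin. A quick way to confirm the load succeeded, using the repository name from the log (a sketch, not part of the run):

    docker images --digests gcr.io/k8s-minikube/kicbase-builds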
	I1206 10:26:18.511687  365843 start.go:360] acquireMachinesLock for addons-545880: {Name:mkbddccc20b56c014d20069484dc6aca478c0df0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:26:18.511811  365843 start.go:364] duration metric: took 104.551µs to acquireMachinesLock for "addons-545880"
	I1206 10:26:18.511837  365843 start.go:93] Provisioning new machine with config: &{Name:addons-545880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-545880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 10:26:18.511907  365843 start.go:125] createHost starting for "" (driver="docker")
	I1206 10:26:18.515412  365843 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1206 10:26:18.515659  365843 start.go:159] libmachine.API.Create for "addons-545880" (driver="docker")
	I1206 10:26:18.515700  365843 client.go:173] LocalClient.Create starting
	I1206 10:26:18.515815  365843 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem
	I1206 10:26:18.697389  365843 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem
	I1206 10:26:18.943080  365843 cli_runner.go:164] Run: docker network inspect addons-545880 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 10:26:18.967774  365843 cli_runner.go:211] docker network inspect addons-545880 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 10:26:18.967859  365843 network_create.go:284] running [docker network inspect addons-545880] to gather additional debugging logs...
	I1206 10:26:18.967882  365843 cli_runner.go:164] Run: docker network inspect addons-545880
	W1206 10:26:18.983985  365843 cli_runner.go:211] docker network inspect addons-545880 returned with exit code 1
	I1206 10:26:18.984017  365843 network_create.go:287] error running [docker network inspect addons-545880]: docker network inspect addons-545880: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-545880 not found
	I1206 10:26:18.984038  365843 network_create.go:289] output of [docker network inspect addons-545880]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-545880 not found
	
	** /stderr **
	I1206 10:26:18.984134  365843 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:26:19.001199  365843 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001997bb0}
	I1206 10:26:19.001265  365843 network_create.go:124] attempt to create docker network addons-545880 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1206 10:26:19.001337  365843 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-545880 addons-545880
	I1206 10:26:19.068372  365843 network_create.go:108] docker network addons-545880 192.168.49.0/24 created
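The inspect-then-create sequence above can be reproduced by hand with the exact flags from the log; inspect fails while the network is still absent, so creation proceeds (a minimal sketch):

    docker network inspect addons-545880 >/dev/null 2>&1 || \
    docker network create --driver=bridge \
      --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true \
      --label=name.minikube.sigs.k8s.io=addons-545880 \
      addons-545880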
	I1206 10:26:19.068406  365843 kic.go:121] calculated static IP "192.168.49.2" for the "addons-545880" container
	I1206 10:26:19.068511  365843 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 10:26:19.084852  365843 cli_runner.go:164] Run: docker volume create addons-545880 --label name.minikube.sigs.k8s.io=addons-545880 --label created_by.minikube.sigs.k8s.io=true
	I1206 10:26:19.103544  365843 oci.go:103] Successfully created a docker volume addons-545880
	I1206 10:26:19.103648  365843 cli_runner.go:164] Run: docker run --rm --name addons-545880-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-545880 --entrypoint /usr/bin/test -v addons-545880:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 10:26:20.646987  365843 cli_runner.go:217] Completed: docker run --rm --name addons-545880-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-545880 --entrypoint /usr/bin/test -v addons-545880:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib: (1.543302795s)
	I1206 10:26:20.647022  365843 oci.go:107] Successfully prepared a docker volume addons-545880
	I1206 10:26:20.647066  365843 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 10:26:20.647092  365843 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 10:26:20.647161  365843 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-545880:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 10:26:24.583440  365843 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-545880:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.936235065s)
	I1206 10:26:24.583476  365843 kic.go:203] duration metric: took 3.936392629s to extract preloaded images to volume ...
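The ~3.9s extraction uses a throwaway container purely as a tar runner: the lz4 preload is bind-mounted read-only and unpacked into the named volume that later backs /var in the node container. Schematically, with $PRELOAD and $KIC_IMAGE standing in for the long values logged above:

    # Unpack the preloaded image tarball straight into the profile's volume.
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PRELOAD:/preloaded.tar:ro" \
      -v addons-545880:/extractDir \
      "$KIC_IMAGE" -I lz4 -xf /preloaded.tar -C /extractDir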
	W1206 10:26:24.583615  365843 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1206 10:26:24.583724  365843 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 10:26:24.646036  365843 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-545880 --name addons-545880 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-545880 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-545880 --network addons-545880 --ip 192.168.49.2 --volume addons-545880:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1206 10:26:24.957434  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Running}}
	I1206 10:26:24.980192  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:26:25.001898  365843 cli_runner.go:164] Run: docker exec addons-545880 stat /var/lib/dpkg/alternatives/iptables
	I1206 10:26:25.056757  365843 oci.go:144] the created container "addons-545880" has a running status.
	I1206 10:26:25.056786  365843 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa...
	I1206 10:26:25.314150  365843 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 10:26:25.334819  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:26:25.356817  365843 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 10:26:25.356837  365843 kic_runner.go:114] Args: [docker exec --privileged addons-545880 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 10:26:25.425964  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:26:25.455070  365843 machine.go:94] provisionDockerMachine start ...
	I1206 10:26:25.455174  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:26:25.482546  365843 main.go:143] libmachine: Using SSH client type: native
	I1206 10:26:25.482886  365843 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1206 10:26:25.482900  365843 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:26:25.483648  365843 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1206 10:26:28.634993  365843 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-545880
	
	I1206 10:26:28.635016  365843 ubuntu.go:182] provisioning hostname "addons-545880"
	I1206 10:26:28.635080  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:26:28.652678  365843 main.go:143] libmachine: Using SSH client type: native
	I1206 10:26:28.652999  365843 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1206 10:26:28.653015  365843 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-545880 && echo "addons-545880" | sudo tee /etc/hostname
	I1206 10:26:28.812977  365843 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-545880
	
	I1206 10:26:28.813063  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:26:28.831988  365843 main.go:143] libmachine: Using SSH client type: native
	I1206 10:26:28.832328  365843 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1206 10:26:28.832350  365843 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-545880' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-545880/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-545880' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:26:28.983825  365843 main.go:143] libmachine: SSH cmd err, output: <nil>: 
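Provisioning rides entirely on SSH to the published container port (33143 in this run) as user docker with the generated machine key. The same channel can be spot-checked by hand (key path shortened to $MINIKUBE_HOME; a hypothetical check, not part of the run):

    ssh -i "$MINIKUBE_HOME/machines/addons-545880/id_rsa" \
        -p 33143 docker@127.0.0.1 hostname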
	I1206 10:26:28.983857  365843 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-362985/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-362985/.minikube}
	I1206 10:26:28.983877  365843 ubuntu.go:190] setting up certificates
	I1206 10:26:28.983887  365843 provision.go:84] configureAuth start
	I1206 10:26:28.983946  365843 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-545880
	I1206 10:26:29.000386  365843 provision.go:143] copyHostCerts
	I1206 10:26:29.000482  365843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem (1082 bytes)
	I1206 10:26:29.000625  365843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem (1123 bytes)
	I1206 10:26:29.000690  365843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem (1679 bytes)
	I1206 10:26:29.000947  365843 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem org=jenkins.addons-545880 san=[127.0.0.1 192.168.49.2 addons-545880 localhost minikube]
	I1206 10:26:29.513592  365843 provision.go:177] copyRemoteCerts
	I1206 10:26:29.513666  365843 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:26:29.513708  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:26:29.531256  365843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:26:29.635406  365843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 10:26:29.653517  365843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1206 10:26:29.672860  365843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 10:26:29.692361  365843 provision.go:87] duration metric: took 708.449519ms to configureAuth
	I1206 10:26:29.692436  365843 ubuntu.go:206] setting minikube options for container-runtime
	I1206 10:26:29.692646  365843 config.go:182] Loaded profile config "addons-545880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:26:29.692762  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:26:29.710530  365843 main.go:143] libmachine: Using SSH client type: native
	I1206 10:26:29.710851  365843 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1206 10:26:29.710875  365843 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 10:26:30.051304  365843 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 10:26:30.051333  365843 machine.go:97] duration metric: took 4.596242148s to provisionDockerMachine
	I1206 10:26:30.051346  365843 client.go:176] duration metric: took 11.535634515s to LocalClient.Create
	I1206 10:26:30.051360  365843 start.go:167] duration metric: took 11.535703529s to libmachine.API.Create "addons-545880"
	I1206 10:26:30.051368  365843 start.go:293] postStartSetup for "addons-545880" (driver="docker")
	I1206 10:26:30.051412  365843 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:26:30.051618  365843 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:26:30.051671  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:26:30.094193  365843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:26:30.236130  365843 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:26:30.240076  365843 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 10:26:30.240106  365843 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 10:26:30.240119  365843 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-362985/.minikube/addons for local assets ...
	I1206 10:26:30.240212  365843 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-362985/.minikube/files for local assets ...
	I1206 10:26:30.240237  365843 start.go:296] duration metric: took 188.863853ms for postStartSetup
	I1206 10:26:30.240613  365843 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-545880
	I1206 10:26:30.259182  365843 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/config.json ...
	I1206 10:26:30.259533  365843 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:26:30.259588  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:26:30.277753  365843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:26:30.381242  365843 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 10:26:30.386082  365843 start.go:128] duration metric: took 11.874158588s to createHost
	I1206 10:26:30.386110  365843 start.go:83] releasing machines lock for "addons-545880", held for 11.874290083s
	I1206 10:26:30.386180  365843 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-545880
	I1206 10:26:30.402938  365843 ssh_runner.go:195] Run: cat /version.json
	I1206 10:26:30.402964  365843 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:26:30.402993  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:26:30.403034  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:26:30.421523  365843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:26:30.425012  365843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:26:30.611263  365843 ssh_runner.go:195] Run: systemctl --version
	I1206 10:26:30.617625  365843 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 10:26:30.659043  365843 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 10:26:30.663560  365843 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:26:30.663640  365843 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:26:30.694683  365843 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1206 10:26:30.694711  365843 start.go:496] detecting cgroup driver to use...
	I1206 10:26:30.694746  365843 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 10:26:30.694799  365843 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 10:26:30.712307  365843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 10:26:30.725382  365843 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:26:30.725445  365843 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:26:30.743114  365843 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:26:30.762434  365843 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:26:30.878831  365843 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:26:31.009808  365843 docker.go:234] disabling docker service ...
	I1206 10:26:31.009943  365843 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:26:31.032972  365843 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:26:31.046702  365843 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:26:31.166275  365843 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:26:31.283189  365843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 10:26:31.297096  365843 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:26:31.311898  365843 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 10:26:31.311971  365843 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:26:31.321664  365843 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 10:26:31.321738  365843 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:26:31.331290  365843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:26:31.340666  365843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:26:31.350131  365843 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:26:31.358805  365843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:26:31.367848  365843 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:26:31.381670  365843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:26:31.391503  365843 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:26:31.399507  365843 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 10:26:31.407141  365843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:26:31.513722  365843 ssh_runner.go:195] Run: sudo systemctl restart crio
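Taken together, the sed series above is a small rewrite of CRI-O's drop-in config followed by a restart; condensed, with the same file and values as logged:

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    sudo systemctl daemon-reload && sudo systemctl restart crio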
	I1206 10:26:31.675113  365843 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 10:26:31.675276  365843 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 10:26:31.679130  365843 start.go:564] Will wait 60s for crictl version
	I1206 10:26:31.679230  365843 ssh_runner.go:195] Run: which crictl
	I1206 10:26:31.682765  365843 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 10:26:31.708304  365843 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 10:26:31.708449  365843 ssh_runner.go:195] Run: crio --version
	I1206 10:26:31.738317  365843 ssh_runner.go:195] Run: crio --version
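With /etc/crictl.yaml pointing at the CRI-O socket (written a few lines earlier), crictl can query the runtime directly, e.g. (a sketch):

    sudo crictl info     # runtime status and config as JSON
    sudo crictl images   # should list the preloaded images verified below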
	I1206 10:26:31.773296  365843 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1206 10:26:31.776156  365843 cli_runner.go:164] Run: docker network inspect addons-545880 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:26:31.792593  365843 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 10:26:31.796338  365843 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
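That one-liner is minikube's idempotent /etc/hosts pattern: drop any stale line for the name, append the fresh mapping, and copy the result back over /etc/hosts via a temp file. As a reusable shape (add_host_alias is a hypothetical helper, not minikube code):

    add_host_alias() {  # usage: add_host_alias <ip> <name>
      { grep -v $'\t'"$2"'$' /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > /tmp/h.$$ \
        && sudo cp /tmp/h.$$ /etc/hosts
    }
    add_host_alias 192.168.49.1 host.minikube.internal

The same shape recurs further down for control-plane.minikube.internal.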
	I1206 10:26:31.806373  365843 kubeadm.go:884] updating cluster {Name:addons-545880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-545880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:26:31.806495  365843 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 10:26:31.806552  365843 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:26:31.845347  365843 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:26:31.845373  365843 crio.go:433] Images already preloaded, skipping extraction
	I1206 10:26:31.845439  365843 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:26:31.871214  365843 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:26:31.871242  365843 cache_images.go:86] Images are preloaded, skipping loading
	I1206 10:26:31.871250  365843 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1206 10:26:31.871357  365843 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-545880 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-545880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
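This drop-in is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down; once in place, systemd's own tooling shows the merged unit (a sketch):

    systemctl cat kubelet      # base unit plus the 10-kubeadm.conf drop-in
    systemctl status kubelet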
	I1206 10:26:31.871507  365843 ssh_runner.go:195] Run: crio config
	I1206 10:26:31.923043  365843 cni.go:84] Creating CNI manager for ""
	I1206 10:26:31.923075  365843 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:26:31.923095  365843 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 10:26:31.923119  365843 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-545880 NodeName:addons-545880 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:26:31.923277  365843 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-545880"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 10:26:31.923381  365843 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 10:26:31.931106  365843 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:26:31.931228  365843 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:26:31.938984  365843 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1206 10:26:31.952068  365843 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 10:26:31.965345  365843 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
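Once staged, a config of this shape can be validated without mutating the node by handing it to kubeadm in dry-run mode; a sketch using the binary and staging paths just logged:

    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run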
	I1206 10:26:31.978869  365843 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 10:26:31.982501  365843 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 10:26:31.993850  365843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:26:32.104139  365843 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:26:32.120124  365843 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880 for IP: 192.168.49.2
	I1206 10:26:32.120199  365843 certs.go:195] generating shared ca certs ...
	I1206 10:26:32.120230  365843 certs.go:227] acquiring lock for ca certs: {Name:mke2ec61a37b6f3abbcbeb9abd23d6a19d011dd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:26:32.120419  365843 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.key
	I1206 10:26:32.254819  365843 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt ...
	I1206 10:26:32.254857  365843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt: {Name:mkf771137823ff623c6ce260a556cc9c6ad707d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:26:32.255088  365843 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-362985/.minikube/ca.key ...
	I1206 10:26:32.255105  365843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/.minikube/ca.key: {Name:mk197c4654e2009e868e924392a5b050891b363d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:26:32.255198  365843 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.key
	I1206 10:26:32.685089  365843 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.crt ...
	I1206 10:26:32.685131  365843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.crt: {Name:mk4b6bd64e89bf8d549094c3ac332c5535e71c3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:26:32.685320  365843 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.key ...
	I1206 10:26:32.685335  365843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.key: {Name:mk2a9ab3182afa855d079fe41df0313a5e14ad3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:26:32.685408  365843 certs.go:257] generating profile certs ...
	I1206 10:26:32.685470  365843 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/client.key
	I1206 10:26:32.685485  365843 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/client.crt with IP's: []
	I1206 10:26:33.138918  365843 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/client.crt ...
	I1206 10:26:33.138954  365843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/client.crt: {Name:mkb1f18f04909fc72b9bf1587c10de224fd47072 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:26:33.139144  365843 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/client.key ...
	I1206 10:26:33.139157  365843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/client.key: {Name:mk166fa649e391da7c0ce7f16eb96f9706703743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:26:33.139240  365843 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/apiserver.key.1afdede4
	I1206 10:26:33.139262  365843 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/apiserver.crt.1afdede4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1206 10:26:34.173485  365843 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/apiserver.crt.1afdede4 ...
	I1206 10:26:34.173521  365843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/apiserver.crt.1afdede4: {Name:mk0415fbe505048311acc9e1ccabd0da9ef1010b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:26:34.173710  365843 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/apiserver.key.1afdede4 ...
	I1206 10:26:34.173724  365843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/apiserver.key.1afdede4: {Name:mke5528a50033758cc15a754271a4aa845ad8d3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:26:34.173803  365843 certs.go:382] copying /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/apiserver.crt.1afdede4 -> /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/apiserver.crt
	I1206 10:26:34.173885  365843 certs.go:386] copying /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/apiserver.key.1afdede4 -> /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/apiserver.key
	I1206 10:26:34.173942  365843 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/proxy-client.key
	I1206 10:26:34.173963  365843 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/proxy-client.crt with IP's: []
	I1206 10:26:34.326260  365843 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/proxy-client.crt ...
	I1206 10:26:34.326291  365843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/proxy-client.crt: {Name:mk9b6c0f834a5853575a3384d35d223d52096489 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:26:34.327229  365843 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/proxy-client.key ...
	I1206 10:26:34.327248  365843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/proxy-client.key: {Name:mk46d66594947c08bd137d9b69279b27e86a64c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:26:34.327496  365843 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 10:26:34.327554  365843 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem (1082 bytes)
	I1206 10:26:34.327590  365843 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:26:34.327621  365843 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem (1679 bytes)
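All profile certs are generated locally and shipped to the node in the scp steps below. The SANs requested for the apiserver cert at 10:26:33.139262 can be confirmed on the finished artifact (a sketch using the paths from the log; -ext requires OpenSSL 1.1.1+):

	openssl x509 -noout -subject -ext subjectAltName \
	  -in /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/apiserver.crt
	# expect IP addresses: 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2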
	I1206 10:26:34.328194  365843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:26:34.347478  365843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 10:26:34.366452  365843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:26:34.386403  365843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 10:26:34.405102  365843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1206 10:26:34.422744  365843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 10:26:34.440462  365843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:26:34.459053  365843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 10:26:34.481101  365843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:26:34.500117  365843 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 10:26:34.516869  365843 ssh_runner.go:195] Run: openssl version
	I1206 10:26:34.527538  365843 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:26:34.536477  365843 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:26:34.544345  365843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:26:34.548207  365843 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:26 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:26:34.548272  365843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:26:34.589739  365843 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 10:26:34.597397  365843 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
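The b5213941.0 symlink created above is the standard OpenSSL subject-hash lookup name: openssl x509 -hash prints the truncated subject hash, and the .0 suffix disambiguates collisions. Reproduced in isolation (a sketch):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0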
	I1206 10:26:34.604884  365843 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:26:34.608686  365843 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 10:26:34.608739  365843 kubeadm.go:401] StartCluster: {Name:addons-545880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-545880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:26:34.608826  365843 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:26:34.608898  365843 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:26:34.638249  365843 cri.go:89] found id: ""
	I1206 10:26:34.638320  365843 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:26:34.646396  365843 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 10:26:34.654345  365843 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 10:26:34.654408  365843 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:26:34.662204  365843 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:26:34.662224  365843 kubeadm.go:158] found existing configuration files:
	
	I1206 10:26:34.662293  365843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 10:26:34.669833  365843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:26:34.669898  365843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:26:34.677263  365843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 10:26:34.685094  365843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:26:34.685159  365843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:26:34.692418  365843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 10:26:34.700411  365843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:26:34.700498  365843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:26:34.707954  365843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 10:26:34.715702  365843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:26:34.715823  365843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 10:26:34.723328  365843 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 10:26:34.764310  365843 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1206 10:26:34.764642  365843 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:26:34.793983  365843 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:26:34.794076  365843 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:26:34.794145  365843 kubeadm.go:319] OS: Linux
	I1206 10:26:34.794210  365843 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:26:34.794299  365843 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:26:34.794371  365843 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:26:34.794492  365843 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:26:34.794578  365843 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:26:34.794649  365843 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:26:34.794720  365843 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:26:34.794789  365843 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:26:34.794865  365843 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:26:34.862625  365843 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:26:34.862815  365843 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:26:34.862963  365843 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:26:34.870565  365843 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:26:34.873867  365843 out.go:252]   - Generating certificates and keys ...
	I1206 10:26:34.874290  365843 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:26:34.874388  365843 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:26:35.882071  365843 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 10:26:36.430227  365843 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 10:26:36.654976  365843 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 10:26:37.161112  365843 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 10:26:37.556781  365843 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 10:26:37.557172  365843 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-545880 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1206 10:26:38.246505  365843 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 10:26:38.247091  365843 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-545880 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1206 10:26:38.635365  365843 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 10:26:38.896700  365843 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 10:26:39.122752  365843 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 10:26:39.122833  365843 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:26:39.481677  365843 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:26:40.114833  365843 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:26:40.618194  365843 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:26:41.653071  365843 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:26:42.157982  365843 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:26:42.158723  365843 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:26:42.162918  365843 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:26:42.172800  365843 out.go:252]   - Booting up control plane ...
	I1206 10:26:42.172923  365843 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:26:42.173242  365843 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:26:42.173320  365843 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:26:42.193704  365843 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:26:42.194114  365843 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:26:42.202175  365843 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:26:42.202666  365843 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:26:42.202717  365843 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:26:42.347864  365843 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:26:42.347985  365843 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 10:26:43.343471  365843 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000757071s
	I1206 10:26:43.347408  365843 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 10:26:43.347525  365843 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1206 10:26:43.347628  365843 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 10:26:43.347719  365843 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1206 10:26:46.305915  365843 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.95796749s
	I1206 10:26:47.502343  365843 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.155039186s
	I1206 10:26:49.349238  365843 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001911377s
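The three control-plane-check probes above poll exactly the endpoints logged at 10:26:43; the same checks can be run by hand from inside the node (a sketch; -k because the serving certs are cluster-internal):

	minikube -p addons-545880 ssh -- curl -sk https://192.168.49.2:8443/livez     # kube-apiserver
	minikube -p addons-545880 ssh -- curl -sk https://127.0.0.1:10257/healthz    # kube-controller-manager
	minikube -p addons-545880 ssh -- curl -sk https://127.0.0.1:10259/livez      # kube-scheduler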
	I1206 10:26:49.380240  365843 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 10:26:49.399646  365843 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 10:26:49.414630  365843 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 10:26:49.414859  365843 kubeadm.go:319] [mark-control-plane] Marking the node addons-545880 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 10:26:49.431137  365843 kubeadm.go:319] [bootstrap-token] Using token: r0k11y.th9qih0pbh9vle2v
	I1206 10:26:49.434234  365843 out.go:252]   - Configuring RBAC rules ...
	I1206 10:26:49.434385  365843 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 10:26:49.439623  365843 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 10:26:49.450425  365843 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 10:26:49.455092  365843 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 10:26:49.459448  365843 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 10:26:49.464084  365843 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 10:26:49.755716  365843 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 10:26:50.196175  365843 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1206 10:26:50.757506  365843 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1206 10:26:50.757530  365843 kubeadm.go:319] 
	I1206 10:26:50.757593  365843 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1206 10:26:50.757600  365843 kubeadm.go:319] 
	I1206 10:26:50.757694  365843 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1206 10:26:50.757714  365843 kubeadm.go:319] 
	I1206 10:26:50.757740  365843 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1206 10:26:50.757798  365843 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 10:26:50.757855  365843 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 10:26:50.757863  365843 kubeadm.go:319] 
	I1206 10:26:50.757931  365843 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1206 10:26:50.757940  365843 kubeadm.go:319] 
	I1206 10:26:50.757985  365843 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 10:26:50.757994  365843 kubeadm.go:319] 
	I1206 10:26:50.758043  365843 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1206 10:26:50.758117  365843 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 10:26:50.758185  365843 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 10:26:50.758191  365843 kubeadm.go:319] 
	I1206 10:26:50.758270  365843 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 10:26:50.758347  365843 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1206 10:26:50.758357  365843 kubeadm.go:319] 
	I1206 10:26:50.758446  365843 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token r0k11y.th9qih0pbh9vle2v \
	I1206 10:26:50.758547  365843 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:89a0fbd3aa916cfb970075c0d704943ded4fe0d81a4b725dbb90c39f29d66bfe \
	I1206 10:26:50.758568  365843 kubeadm.go:319] 	--control-plane 
	I1206 10:26:50.758574  365843 kubeadm.go:319] 
	I1206 10:26:50.758655  365843 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1206 10:26:50.758662  365843 kubeadm.go:319] 
	I1206 10:26:50.758740  365843 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token r0k11y.th9qih0pbh9vle2v \
	I1206 10:26:50.758839  365843 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:89a0fbd3aa916cfb970075c0d704943ded4fe0d81a4b725dbb90c39f29d66bfe 
	I1206 10:26:50.762577  365843 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1206 10:26:50.762798  365843 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 10:26:50.762901  365843 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
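The --discovery-token-ca-cert-hash printed in the join commands above can be recomputed from the cluster CA at any time (a sketch; CA path from the scp step at 10:26:34.328194):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl pkey -pubin -outform der \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'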
	I1206 10:26:50.762923  365843 cni.go:84] Creating CNI manager for ""
	I1206 10:26:50.762933  365843 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:26:50.766141  365843 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1206 10:26:50.769066  365843 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1206 10:26:50.773130  365843 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1206 10:26:50.773156  365843 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1206 10:26:50.786635  365843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1206 10:26:51.125762  365843 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 10:26:51.125858  365843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:26:51.125917  365843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-545880 minikube.k8s.io/updated_at=2025_12_06T10_26_51_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4 minikube.k8s.io/name=addons-545880 minikube.k8s.io/primary=true
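Stripped of the ssh_runner plumbing, the two post-init bootstrap steps above are plain kubectl calls (a sketch; kubeconfig path from the log, label set abbreviated):

	kubectl --kubeconfig /var/lib/minikube/kubeconfig create clusterrolebinding minikube-rbac \
	  --clusterrole=cluster-admin --serviceaccount=kube-system:default
	kubectl --kubeconfig /var/lib/minikube/kubeconfig label --overwrite nodes addons-545880 \
	  minikube.k8s.io/name=addons-545880 minikube.k8s.io/primary=true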
	I1206 10:26:51.295584  365843 ops.go:34] apiserver oom_adj: -16
	I1206 10:26:51.295711  365843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:26:51.795823  365843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:26:52.295852  365843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:26:52.796448  365843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:26:53.296235  365843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:26:53.796440  365843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:26:54.296609  365843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:26:54.795906  365843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:26:55.295904  365843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:26:55.394733  365843 kubeadm.go:1114] duration metric: took 4.268933936s to wait for elevateKubeSystemPrivileges
	I1206 10:26:55.394771  365843 kubeadm.go:403] duration metric: took 20.786038447s to StartCluster
	I1206 10:26:55.394789  365843 settings.go:142] acquiring lock: {Name:mk789e01bfd4ab9fa1e2a8415fa99b570b26926a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:26:55.394908  365843 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 10:26:55.395316  365843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/kubeconfig: {Name:mk779651834cfbdc6f0b5e8f5a9abc0f05106181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:26:55.395567  365843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 10:26:55.395596  365843 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 10:26:55.395834  365843 config.go:182] Loaded profile config "addons-545880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:26:55.395866  365843 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1206 10:26:55.395942  365843 addons.go:70] Setting yakd=true in profile "addons-545880"
	I1206 10:26:55.395958  365843 addons.go:239] Setting addon yakd=true in "addons-545880"
	I1206 10:26:55.395984  365843 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:26:55.396007  365843 addons.go:70] Setting inspektor-gadget=true in profile "addons-545880"
	I1206 10:26:55.396024  365843 addons.go:239] Setting addon inspektor-gadget=true in "addons-545880"
	I1206 10:26:55.396044  365843 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:26:55.396446  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:26:55.396539  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:26:55.396868  365843 addons.go:70] Setting metrics-server=true in profile "addons-545880"
	I1206 10:26:55.396889  365843 addons.go:239] Setting addon metrics-server=true in "addons-545880"
	I1206 10:26:55.396911  365843 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:26:55.397330  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:26:55.399757  365843 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-545880"
	I1206 10:26:55.399792  365843 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-545880"
	I1206 10:26:55.399929  365843 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:26:55.400413  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:26:55.400682  365843 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-545880"
	I1206 10:26:55.403919  365843 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-545880"
	I1206 10:26:55.403958  365843 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:26:55.404427  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:26:55.403425  365843 addons.go:70] Setting registry=true in profile "addons-545880"
	I1206 10:26:55.413996  365843 addons.go:239] Setting addon registry=true in "addons-545880"
	I1206 10:26:55.414059  365843 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:26:55.414690  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:26:55.403441  365843 addons.go:70] Setting registry-creds=true in profile "addons-545880"
	I1206 10:26:55.423408  365843 addons.go:239] Setting addon registry-creds=true in "addons-545880"
	I1206 10:26:55.423519  365843 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:26:55.403595  365843 addons.go:70] Setting storage-provisioner=true in profile "addons-545880"
	I1206 10:26:55.403601  365843 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-545880"
	I1206 10:26:55.403605  365843 addons.go:70] Setting volcano=true in profile "addons-545880"
	I1206 10:26:55.403608  365843 addons.go:70] Setting volumesnapshots=true in profile "addons-545880"
	I1206 10:26:55.403700  365843 out.go:179] * Verifying Kubernetes components...
	I1206 10:26:55.403712  365843 addons.go:70] Setting gcp-auth=true in profile "addons-545880"
	I1206 10:26:55.403717  365843 addons.go:70] Setting cloud-spanner=true in profile "addons-545880"
	I1206 10:26:55.403727  365843 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-545880"
	I1206 10:26:55.403731  365843 addons.go:70] Setting default-storageclass=true in profile "addons-545880"
	I1206 10:26:55.403735  365843 addons.go:70] Setting ingress-dns=true in profile "addons-545880"
	I1206 10:26:55.403738  365843 addons.go:70] Setting ingress=true in profile "addons-545880"
	I1206 10:26:55.430347  365843 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-545880"
	I1206 10:26:55.430372  365843 addons.go:239] Setting addon volumesnapshots=true in "addons-545880"
	I1206 10:26:55.431554  365843 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:26:55.433741  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:26:55.430984  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:26:55.430995  365843 addons.go:239] Setting addon storage-provisioner=true in "addons-545880"
	I1206 10:26:55.451763  365843 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:26:55.431005  365843 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-545880"
	I1206 10:26:55.475834  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:26:55.431030  365843 addons.go:239] Setting addon volcano=true in "addons-545880"
	I1206 10:26:55.475977  365843 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:26:55.431043  365843 mustload.go:66] Loading cluster: addons-545880
	I1206 10:26:55.476362  365843 config.go:182] Loaded profile config "addons-545880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:26:55.431106  365843 addons.go:239] Setting addon cloud-spanner=true in "addons-545880"
	I1206 10:26:55.477618  365843 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:26:55.478143  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:26:55.495791  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:26:55.431140  365843 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-545880"
	I1206 10:26:55.443323  365843 addons.go:239] Setting addon ingress-dns=true in "addons-545880"
	I1206 10:26:55.443366  365843 addons.go:239] Setting addon ingress=true in "addons-545880"
	I1206 10:26:55.443467  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:26:55.498157  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:26:55.535563  365843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:26:55.568757  365843 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1206 10:26:55.571838  365843 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1206 10:26:55.571864  365843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1206 10:26:55.571936  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:26:55.584748  365843 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:26:55.588684  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:26:55.595842  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:26:55.597368  365843 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:26:55.597872  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:26:55.611488  365843 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1206 10:26:55.611673  365843 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:26:55.612197  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:26:55.619176  365843 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1206 10:26:55.619340  365843 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1206 10:26:55.619354  365843 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1206 10:26:55.619493  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:26:55.643508  365843 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1206 10:26:55.646784  365843 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1206 10:26:55.646807  365843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1206 10:26:55.646874  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:26:55.678766  365843 out.go:179]   - Using image docker.io/registry:3.0.0
	I1206 10:26:55.681775  365843 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1206 10:26:55.684655  365843 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1206 10:26:55.684682  365843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1206 10:26:55.684765  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:26:55.710808  365843 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1206 10:26:55.716836  365843 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1206 10:26:55.716859  365843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1206 10:26:55.716933  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:26:55.728970  365843 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1206 10:26:55.728998  365843 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1206 10:26:55.729084  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:26:55.746545  365843 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:26:55.750924  365843 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-545880"
	I1206 10:26:55.751045  365843 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:26:55.751808  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:26:55.763890  365843 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1206 10:26:55.787751  365843 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 10:26:55.790610  365843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:26:55.798765  365843 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1206 10:26:55.798785  365843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1206 10:26:55.798847  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:26:55.799111  365843 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:26:55.799137  365843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 10:26:55.799217  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:26:55.788023  365843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
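The sed pipeline above injects a hosts block into the CoreDNS Corefile so pods resolve host.minikube.internal to the gateway, and enables the log plugin. Its effect can be verified once the replace completes (a sketch):

	# expected fragment inside the Corefile:
	#     hosts {
	#        192.168.49.1 host.minikube.internal
	#        fallthrough
	#     }
	kubectl --context addons-545880 -n kube-system get configmap coredns \
	  -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'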
	I1206 10:26:55.819336  365843 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1206 10:26:55.819697  365843 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1206 10:26:55.823417  365843 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1206 10:26:55.823442  365843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1206 10:26:55.823512  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:26:55.823695  365843 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1206 10:26:55.823705  365843 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1206 10:26:55.823744  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:26:55.854980  365843 addons.go:239] Setting addon default-storageclass=true in "addons-545880"
	I1206 10:26:55.855021  365843 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:26:55.855504  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:26:55.883443  365843 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1206 10:26:55.889126  365843 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1206 10:26:55.892240  365843 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1206 10:26:55.897148  365843 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1206 10:26:55.901241  365843 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1206 10:26:55.902264  365843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:26:55.905454  365843 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1206 10:26:55.926916  365843 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1206 10:26:55.927140  365843 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1206 10:26:55.932348  365843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:26:55.958632  365843 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1206 10:26:55.962195  365843 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1206 10:26:55.964049  365843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:26:55.968399  365843 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1206 10:26:55.968475  365843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1206 10:26:55.968571  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	W1206 10:26:55.969102  365843 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1206 10:26:55.982324  365843 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1206 10:26:55.985132  365843 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1206 10:26:55.985205  365843 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1206 10:26:55.985309  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:26:55.992142  365843 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:26:56.017521  365843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:26:56.027674  365843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:26:56.048289  365843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:26:56.050817  365843 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1206 10:26:56.056548  365843 out.go:179]   - Using image docker.io/busybox:stable
	I1206 10:26:56.059240  365843 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1206 10:26:56.060352  365843 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1206 10:26:56.060380  365843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1206 10:26:56.060452  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:26:56.061394  365843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:26:56.068100  365843 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1206 10:26:56.068126  365843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1206 10:26:56.068197  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:26:56.068854  365843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:26:56.082814  365843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:26:56.136276  365843 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 10:26:56.136304  365843 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 10:26:56.136361  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:26:56.154518  365843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:26:56.167592  365843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:26:56.180964  365843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	W1206 10:26:56.182126  365843 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1206 10:26:56.182156  365843 retry.go:31] will retry after 311.449807ms: ssh: handshake failed: EOF
	I1206 10:26:56.190252  365843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:26:56.206629  365843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	W1206 10:26:56.207732  365843 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1206 10:26:56.207754  365843 retry.go:31] will retry after 249.762898ms: ssh: handshake failed: EOF
	W1206 10:26:56.494665  365843 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1206 10:26:56.494693  365843 retry.go:31] will retry after 285.993596ms: ssh: handshake failed: EOF
	I1206 10:26:56.586068  365843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1206 10:26:56.760971  365843 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1206 10:26:56.761035  365843 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1206 10:26:56.768421  365843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:26:56.788122  365843 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1206 10:26:56.788147  365843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1206 10:26:56.824696  365843 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1206 10:26:56.824723  365843 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1206 10:26:56.840892  365843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1206 10:26:56.876257  365843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1206 10:26:56.882175  365843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1206 10:26:56.891786  365843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1206 10:26:56.917826  365843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1206 10:26:56.979846  365843 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1206 10:26:56.979918  365843 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1206 10:26:57.012253  365843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1206 10:26:57.019122  365843 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1206 10:26:57.019196  365843 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1206 10:26:57.028169  365843 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1206 10:26:57.028241  365843 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1206 10:26:57.048027  365843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:26:57.112654  365843 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1206 10:26:57.112725  365843 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1206 10:26:57.115104  365843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1206 10:26:57.160437  365843 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 10:26:57.160535  365843 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1206 10:26:57.207345  365843 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1206 10:26:57.207367  365843 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1206 10:26:57.208652  365843 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1206 10:26:57.208668  365843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1206 10:26:57.343696  365843 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1206 10:26:57.343771  365843 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1206 10:26:57.347691  365843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1206 10:26:57.355589  365843 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1206 10:26:57.355614  365843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1206 10:26:57.362689  365843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 10:26:57.416618  365843 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.424433441s)
	I1206 10:26:57.417381  365843 node_ready.go:35] waiting up to 6m0s for node "addons-545880" to be "Ready" ...
	I1206 10:26:57.418126  365843 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.598991437s)
	I1206 10:26:57.418152  365843 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
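
The two completions above show how minikube publishes host.minikube.internal to cluster DNS: it pipes the coredns ConfigMap through sed to splice a hosts{} block in front of the forward directive, then feeds the result back with kubectl replace. Below is a minimal Go sketch of the same pipeline, assuming kubectl on PATH and the 192.168.49.1 gateway seen in the log; it is an illustration of the shell pipeline, not minikube's code.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	gateway := "192.168.49.1" // assumed Docker network gateway, as in the log
	// Insert a hosts{} block before CoreDNS's forward directive and replace
	// the ConfigMap in place, mirroring the /bin/bash -c pipeline above.
	script := fmt.Sprintf(`kubectl -n kube-system get configmap coredns -o yaml | `+
		`sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }' | `+
		`kubectl replace -f -`, gateway)
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("coredns patch failed:", err)
	}
}
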
	I1206 10:26:57.610704  365843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1206 10:26:57.620885  365843 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1206 10:26:57.620918  365843 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1206 10:26:57.702495  365843 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1206 10:26:57.702530  365843 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1206 10:26:57.854193  365843 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 10:26:57.854226  365843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1206 10:26:57.922717  365843 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-545880" context rescaled to 1 replicas
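
The kapi.go:214 line is the Scale subresource at work: on a single-node cluster minikube pins coredns to one replica. A sketch of that rescale with client-go, assuming the kubeconfig path the log uses; names are taken from the log, the rest is illustrative.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()
	// Read the deployment's Scale subresource, set replicas to 1, write it back.
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 1
	_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	fmt.Println("rescaled coredns to 1 replica, err =", err)
}
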
	I1206 10:26:57.957226  365843 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.371118161s)
	I1206 10:26:58.069786  365843 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1206 10:26:58.069829  365843 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1206 10:26:58.117240  365843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 10:26:58.283663  365843 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1206 10:26:58.283745  365843 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1206 10:26:58.478764  365843 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1206 10:26:58.478840  365843 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1206 10:26:58.653255  365843 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1206 10:26:58.653324  365843 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1206 10:26:58.867398  365843 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1206 10:26:58.867467  365843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1206 10:26:58.992932  365843 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1206 10:26:58.992960  365843 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1206 10:26:59.212393  365843 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1206 10:26:59.212419  365843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1206 10:26:59.388709  365843 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1206 10:26:59.388778  365843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	W1206 10:26:59.424927  365843 node_ready.go:57] node "addons-545880" has "Ready":"False" status (will retry)
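
The node_ready.go warnings come from a poll over the node object: each iteration reads the node's NodeReady condition and retries while it is not True (here for up to the 6m0s announced at 10:26:57). A sketch of one such check with client-go; the node name is from the log, client setup is an assumption.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	// Scan the status conditions for the Ready condition and report its value.
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ok, err := nodeReady(cs, "addons-545880")
	fmt.Println("ready:", ok, "err:", err)
}
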
	I1206 10:26:59.586206  365843 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1206 10:26:59.586277  365843 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1206 10:26:59.859237  365843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1206 10:27:00.010501  365843 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.242038625s)
	I1206 10:27:00.010640  365843 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.169722597s)
	I1206 10:27:00.651414  365843 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.775032579s)
	I1206 10:27:00.932952  365843 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.050700954s)
	I1206 10:27:00.933043  365843 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.041185196s)
	I1206 10:27:00.933101  365843 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.015203792s)
	W1206 10:27:01.434232  365843 node_ready.go:57] node "addons-545880" has "Ready":"False" status (will retry)
	I1206 10:27:01.922166  365843 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.909822613s)
	I1206 10:27:01.922204  365843 addons.go:495] Verifying addon ingress=true in "addons-545880"
	I1206 10:27:01.922354  365843 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.87425312s)
	I1206 10:27:01.922611  365843 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.807442642s)
	I1206 10:27:01.922705  365843 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.574935078s)
	I1206 10:27:01.922964  365843 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.560201575s)
	I1206 10:27:01.922985  365843 addons.go:495] Verifying addon metrics-server=true in "addons-545880"
	I1206 10:27:01.923039  365843 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.312288854s)
	I1206 10:27:01.923090  365843 addons.go:495] Verifying addon registry=true in "addons-545880"
	I1206 10:27:01.923342  365843 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.806045083s)
	W1206 10:27:01.923537  365843 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1206 10:27:01.923565  365843 retry.go:31] will retry after 251.579721ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
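
The failure above is a CRD race rather than a broken manifest: the VolumeSnapshotClass is sent in the same apply that creates its CRD, and the API server's discovery has not registered the new kind yet, so the REST mapping lookup fails and retry.go re-applies after a backoff (the 10:27:02 attempt below adds --force). A sketch of that apply-with-backoff loop; file names, attempt count, and backoff values are illustrative assumptions.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func applyWithRetry(files []string, attempts int) error {
	backoff := 250 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		args := []string{"apply"}
		for _, f := range files {
			args = append(args, "-f", f)
		}
		out, e := exec.Command("kubectl", args...).CombinedOutput()
		if e == nil {
			return nil // apply succeeded once the CRDs were registered
		}
		err = fmt.Errorf("%v: %s", e, out)
		time.Sleep(backoff)
		backoff *= 2 // exponential backoff between attempts
	}
	return err
}

func main() {
	err := applyWithRetry([]string{"crds.yaml", "snapshotclass.yaml"}, 5)
	fmt.Println("final result:", err)
}
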
	I1206 10:27:01.925812  365843 out.go:179] * Verifying ingress addon...
	I1206 10:27:01.927764  365843 out.go:179] * Verifying registry addon...
	I1206 10:27:01.927788  365843 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-545880 service yakd-dashboard -n yakd-dashboard
	
	I1206 10:27:01.930607  365843 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1206 10:27:01.930794  365843 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1206 10:27:01.941322  365843 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1206 10:27:01.941348  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:01.941497  365843 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1206 10:27:01.941512  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
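
Each kapi.go:96 line from here on is one iteration of a label-selector poll: list the pods matching the selector in the namespace and keep waiting until all of them report phase Running. A client-go sketch of that loop, assuming the standard apimachinery wait helper; the interval and timeout are illustrative, the selector and namespace are from the log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPods(cs *kubernetes.Clientset, ns, selector string) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // keep polling; transient errors are retried
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil // at least one pod still Pending
				}
			}
			return true, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitForPods(cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx"))
}
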
	I1206 10:27:02.176003  365843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 10:27:02.238251  365843 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.378903705s)
	I1206 10:27:02.238289  365843 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-545880"
	I1206 10:27:02.241401  365843 out.go:179] * Verifying csi-hostpath-driver addon...
	I1206 10:27:02.245069  365843 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1206 10:27:02.279736  365843 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1206 10:27:02.279755  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:02.435652  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:02.436011  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:02.748984  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:02.934270  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:02.934467  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:03.249032  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:03.359230  365843 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1206 10:27:03.359330  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:27:03.377198  365843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:27:03.436239  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:03.436359  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:03.502044  365843 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1206 10:27:03.516804  365843 addons.go:239] Setting addon gcp-auth=true in "addons-545880"
	I1206 10:27:03.516854  365843 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:27:03.517333  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:27:03.534900  365843 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1206 10:27:03.534980  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:27:03.553572  365843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
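
The cli_runner/sshutil pair above is how minikube reaches a Docker-driver node: it asks Docker which host port is mapped to the container's 22/tcp, then dials 127.0.0.1:<port> with the machine's SSH key. A sketch of the port lookup; the container name mirrors the log, the rest is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostSSHPort returns the host port Docker mapped to the container's 22/tcp,
// using the same Go-template format string as the cli_runner line above.
func hostSSHPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("addons-545880")
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh to 127.0.0.1 on port", port)
}
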
	I1206 10:27:03.748216  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1206 10:27:03.921118  365843 node_ready.go:57] node "addons-545880" has "Ready":"False" status (will retry)
	I1206 10:27:03.934485  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:03.934630  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:04.248943  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:04.434274  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:04.434371  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:04.749397  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:04.897146  365843 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.721089362s)
	I1206 10:27:04.897209  365843 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.362289931s)
	I1206 10:27:04.900002  365843 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1206 10:27:04.903019  365843 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1206 10:27:04.905888  365843 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1206 10:27:04.905908  365843 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1206 10:27:04.920462  365843 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1206 10:27:04.920484  365843 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1206 10:27:04.936908  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:04.937112  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:04.939689  365843 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1206 10:27:04.939710  365843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1206 10:27:04.955835  365843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1206 10:27:05.249072  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:05.452558  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:05.456610  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:05.473920  365843 addons.go:495] Verifying addon gcp-auth=true in "addons-545880"
	I1206 10:27:05.476924  365843 out.go:179] * Verifying gcp-auth addon...
	I1206 10:27:05.480596  365843 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1206 10:27:05.489823  365843 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1206 10:27:05.489849  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:05.748323  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	(the same kapi.go:96 polls repeat every ~0.25-0.5s through 10:27:36 for the csi-hostpath-driver, registry, ingress-nginx, and gcp-auth selectors, all still Pending; node_ready.go:57 likewise keeps reporting node "addons-545880" has "Ready":"False" status (will retry) roughly every 2s over the same window)
	I1206 10:27:36.932669  365843 node_ready.go:49] node "addons-545880" is "Ready"
	I1206 10:27:36.932708  365843 node_ready.go:38] duration metric: took 39.515291987s for node "addons-545880" to be "Ready" ...
	I1206 10:27:36.932739  365843 api_server.go:52] waiting for apiserver process to appear ...
	I1206 10:27:36.932824  365843 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:27:36.948827  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:36.949421  365843 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1206 10:27:36.949443  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:36.950323  365843 api_server.go:72] duration metric: took 41.55469851s to wait for apiserver process to appear ...
	I1206 10:27:36.950350  365843 api_server.go:88] waiting for apiserver healthz status ...
	I1206 10:27:36.950376  365843 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1206 10:27:36.959790  365843 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1206 10:27:36.966282  365843 api_server.go:141] control plane version: v1.34.2
	I1206 10:27:36.966316  365843 api_server.go:131] duration metric: took 15.950523ms to wait for apiserver health ...
	I1206 10:27:36.966326  365843 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 10:27:36.979075  365843 system_pods.go:59] 19 kube-system pods found
	I1206 10:27:36.979119  365843 system_pods.go:61] "coredns-66bc5c9577-mf79v" [9fb9121c-9008-465a-bb08-cf1bfb566a86] Pending
	I1206 10:27:36.979126  365843 system_pods.go:61] "csi-hostpath-attacher-0" [c0a59cbc-3f64-4410-8abf-b27d4ffe5eca] Pending
	I1206 10:27:36.979130  365843 system_pods.go:61] "csi-hostpath-resizer-0" [0c654e88-775b-4aa4-a773-413240f38d73] Pending
	I1206 10:27:36.979169  365843 system_pods.go:61] "csi-hostpathplugin-t892t" [e990c5a0-fb3d-4475-b96a-847521762ef2] Pending
	I1206 10:27:36.979195  365843 system_pods.go:61] "etcd-addons-545880" [54accd8f-75ed-493f-adf4-e8b3bdfc9ee7] Running
	I1206 10:27:36.979206  365843 system_pods.go:61] "kindnet-fmxlt" [1cdbf4b9-d4ed-42da-be4b-c7eb54a81d3a] Running
	I1206 10:27:36.979212  365843 system_pods.go:61] "kube-apiserver-addons-545880" [a9526e71-012f-47ce-a364-b6415e72b9d4] Running
	I1206 10:27:36.979216  365843 system_pods.go:61] "kube-controller-manager-addons-545880" [2b34ecb8-3436-46c0-b985-8fb08a699157] Running
	I1206 10:27:36.979227  365843 system_pods.go:61] "kube-ingress-dns-minikube" [5b393164-cddd-4102-8b6e-d024ce6bcb4c] Pending
	I1206 10:27:36.979231  365843 system_pods.go:61] "kube-proxy-9k5w7" [7b4d65bc-d662-4175-864c-c3d1e6e69e31] Running
	I1206 10:27:36.979235  365843 system_pods.go:61] "kube-scheduler-addons-545880" [9a2c1865-c2cd-4843-a2fb-029ad83e8827] Running
	I1206 10:27:36.979238  365843 system_pods.go:61] "metrics-server-85b7d694d7-6j9l7" [c7953423-aab8-4805-bc6f-57aac150e43a] Pending
	I1206 10:27:36.979242  365843 system_pods.go:61] "nvidia-device-plugin-daemonset-6sbmv" [255b0abe-864c-4ff8-9125-e4ffb052005c] Pending
	I1206 10:27:36.979252  365843 system_pods.go:61] "registry-6b586f9694-lrjzv" [ccdcbcbc-0689-4862-8dd1-415689504519] Pending
	I1206 10:27:36.979274  365843 system_pods.go:61] "registry-creds-764b6fb674-nrw5g" [db673d4c-19b0-4f94-bf04-a6cbd90211bf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 10:27:36.979285  365843 system_pods.go:61] "registry-proxy-j4zp9" [7cd8d746-23f2-448e-a9a6-8281f58e0d70] Pending
	I1206 10:27:36.979290  365843 system_pods.go:61] "snapshot-controller-7d9fbc56b8-4trlb" [316c179e-ac90-449c-9f3e-ffcb1016f2ff] Pending
	I1206 10:27:36.979295  365843 system_pods.go:61] "snapshot-controller-7d9fbc56b8-ncslf" [bb9699cb-e616-4ee5-a31e-886ed8794ce4] Pending
	I1206 10:27:36.979312  365843 system_pods.go:61] "storage-provisioner" [5f75f24f-7f45-4a01-a603-961b4f05ca09] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 10:27:36.979326  365843 system_pods.go:74] duration metric: took 12.9924ms to wait for pod list to return data ...
	I1206 10:27:36.979350  365843 default_sa.go:34] waiting for default service account to be created ...
	I1206 10:27:36.984332  365843 default_sa.go:45] found service account: "default"
	I1206 10:27:36.984370  365843 default_sa.go:55] duration metric: took 5.014042ms for default service account to be created ...
	I1206 10:27:36.984381  365843 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 10:27:37.004677  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:37.005945  365843 system_pods.go:86] 19 kube-system pods found
	I1206 10:27:37.005992  365843 system_pods.go:89] "coredns-66bc5c9577-mf79v" [9fb9121c-9008-465a-bb08-cf1bfb566a86] Pending
	I1206 10:27:37.006003  365843 system_pods.go:89] "csi-hostpath-attacher-0" [c0a59cbc-3f64-4410-8abf-b27d4ffe5eca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 10:27:37.006009  365843 system_pods.go:89] "csi-hostpath-resizer-0" [0c654e88-775b-4aa4-a773-413240f38d73] Pending
	I1206 10:27:37.006015  365843 system_pods.go:89] "csi-hostpathplugin-t892t" [e990c5a0-fb3d-4475-b96a-847521762ef2] Pending
	I1206 10:27:37.006018  365843 system_pods.go:89] "etcd-addons-545880" [54accd8f-75ed-493f-adf4-e8b3bdfc9ee7] Running
	I1206 10:27:37.008531  365843 system_pods.go:89] "kindnet-fmxlt" [1cdbf4b9-d4ed-42da-be4b-c7eb54a81d3a] Running
	I1206 10:27:37.008593  365843 system_pods.go:89] "kube-apiserver-addons-545880" [a9526e71-012f-47ce-a364-b6415e72b9d4] Running
	I1206 10:27:37.008600  365843 system_pods.go:89] "kube-controller-manager-addons-545880" [2b34ecb8-3436-46c0-b985-8fb08a699157] Running
	I1206 10:27:37.008608  365843 system_pods.go:89] "kube-ingress-dns-minikube" [5b393164-cddd-4102-8b6e-d024ce6bcb4c] Pending
	I1206 10:27:37.008612  365843 system_pods.go:89] "kube-proxy-9k5w7" [7b4d65bc-d662-4175-864c-c3d1e6e69e31] Running
	I1206 10:27:37.008617  365843 system_pods.go:89] "kube-scheduler-addons-545880" [9a2c1865-c2cd-4843-a2fb-029ad83e8827] Running
	I1206 10:27:37.008622  365843 system_pods.go:89] "metrics-server-85b7d694d7-6j9l7" [c7953423-aab8-4805-bc6f-57aac150e43a] Pending
	I1206 10:27:37.008655  365843 system_pods.go:89] "nvidia-device-plugin-daemonset-6sbmv" [255b0abe-864c-4ff8-9125-e4ffb052005c] Pending
	I1206 10:27:37.008669  365843 system_pods.go:89] "registry-6b586f9694-lrjzv" [ccdcbcbc-0689-4862-8dd1-415689504519] Pending
	I1206 10:27:37.008679  365843 system_pods.go:89] "registry-creds-764b6fb674-nrw5g" [db673d4c-19b0-4f94-bf04-a6cbd90211bf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 10:27:37.008689  365843 system_pods.go:89] "registry-proxy-j4zp9" [7cd8d746-23f2-448e-a9a6-8281f58e0d70] Pending
	I1206 10:27:37.008698  365843 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4trlb" [316c179e-ac90-449c-9f3e-ffcb1016f2ff] Pending
	I1206 10:27:37.008722  365843 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ncslf" [bb9699cb-e616-4ee5-a31e-886ed8794ce4] Pending
	I1206 10:27:37.008738  365843 system_pods.go:89] "storage-provisioner" [5f75f24f-7f45-4a01-a603-961b4f05ca09] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 10:27:37.008775  365843 retry.go:31] will retry after 277.323708ms: missing components: kube-dns
	I1206 10:27:37.253451  365843 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1206 10:27:37.253503  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:37.294581  365843 system_pods.go:86] 19 kube-system pods found
	I1206 10:27:37.294637  365843 system_pods.go:89] "coredns-66bc5c9577-mf79v" [9fb9121c-9008-465a-bb08-cf1bfb566a86] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:27:37.294665  365843 system_pods.go:89] "csi-hostpath-attacher-0" [c0a59cbc-3f64-4410-8abf-b27d4ffe5eca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 10:27:37.294688  365843 system_pods.go:89] "csi-hostpath-resizer-0" [0c654e88-775b-4aa4-a773-413240f38d73] Pending
	I1206 10:27:37.294694  365843 system_pods.go:89] "csi-hostpathplugin-t892t" [e990c5a0-fb3d-4475-b96a-847521762ef2] Pending
	I1206 10:27:37.294699  365843 system_pods.go:89] "etcd-addons-545880" [54accd8f-75ed-493f-adf4-e8b3bdfc9ee7] Running
	I1206 10:27:37.294704  365843 system_pods.go:89] "kindnet-fmxlt" [1cdbf4b9-d4ed-42da-be4b-c7eb54a81d3a] Running
	I1206 10:27:37.294715  365843 system_pods.go:89] "kube-apiserver-addons-545880" [a9526e71-012f-47ce-a364-b6415e72b9d4] Running
	I1206 10:27:37.294720  365843 system_pods.go:89] "kube-controller-manager-addons-545880" [2b34ecb8-3436-46c0-b985-8fb08a699157] Running
	I1206 10:27:37.294725  365843 system_pods.go:89] "kube-ingress-dns-minikube" [5b393164-cddd-4102-8b6e-d024ce6bcb4c] Pending
	I1206 10:27:37.294747  365843 system_pods.go:89] "kube-proxy-9k5w7" [7b4d65bc-d662-4175-864c-c3d1e6e69e31] Running
	I1206 10:27:37.294759  365843 system_pods.go:89] "kube-scheduler-addons-545880" [9a2c1865-c2cd-4843-a2fb-029ad83e8827] Running
	I1206 10:27:37.294763  365843 system_pods.go:89] "metrics-server-85b7d694d7-6j9l7" [c7953423-aab8-4805-bc6f-57aac150e43a] Pending
	I1206 10:27:37.294767  365843 system_pods.go:89] "nvidia-device-plugin-daemonset-6sbmv" [255b0abe-864c-4ff8-9125-e4ffb052005c] Pending
	I1206 10:27:37.294783  365843 system_pods.go:89] "registry-6b586f9694-lrjzv" [ccdcbcbc-0689-4862-8dd1-415689504519] Pending
	I1206 10:27:37.294797  365843 system_pods.go:89] "registry-creds-764b6fb674-nrw5g" [db673d4c-19b0-4f94-bf04-a6cbd90211bf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 10:27:37.294803  365843 system_pods.go:89] "registry-proxy-j4zp9" [7cd8d746-23f2-448e-a9a6-8281f58e0d70] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 10:27:37.294833  365843 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4trlb" [316c179e-ac90-449c-9f3e-ffcb1016f2ff] Pending
	I1206 10:27:37.294846  365843 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ncslf" [bb9699cb-e616-4ee5-a31e-886ed8794ce4] Pending
	I1206 10:27:37.294851  365843 system_pods.go:89] "storage-provisioner" [5f75f24f-7f45-4a01-a603-961b4f05ca09] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 10:27:37.294885  365843 retry.go:31] will retry after 262.575521ms: missing components: kube-dns
	I1206 10:27:37.446980  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:37.452621  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:37.496589  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:37.564517  365843 system_pods.go:86] 19 kube-system pods found
	I1206 10:27:37.564553  365843 system_pods.go:89] "coredns-66bc5c9577-mf79v" [9fb9121c-9008-465a-bb08-cf1bfb566a86] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:27:37.564570  365843 system_pods.go:89] "csi-hostpath-attacher-0" [c0a59cbc-3f64-4410-8abf-b27d4ffe5eca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 10:27:37.564595  365843 system_pods.go:89] "csi-hostpath-resizer-0" [0c654e88-775b-4aa4-a773-413240f38d73] Pending
	I1206 10:27:37.564609  365843 system_pods.go:89] "csi-hostpathplugin-t892t" [e990c5a0-fb3d-4475-b96a-847521762ef2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1206 10:27:37.564613  365843 system_pods.go:89] "etcd-addons-545880" [54accd8f-75ed-493f-adf4-e8b3bdfc9ee7] Running
	I1206 10:27:37.564619  365843 system_pods.go:89] "kindnet-fmxlt" [1cdbf4b9-d4ed-42da-be4b-c7eb54a81d3a] Running
	I1206 10:27:37.564629  365843 system_pods.go:89] "kube-apiserver-addons-545880" [a9526e71-012f-47ce-a364-b6415e72b9d4] Running
	I1206 10:27:37.564647  365843 system_pods.go:89] "kube-controller-manager-addons-545880" [2b34ecb8-3436-46c0-b985-8fb08a699157] Running
	I1206 10:27:37.564663  365843 system_pods.go:89] "kube-ingress-dns-minikube" [5b393164-cddd-4102-8b6e-d024ce6bcb4c] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 10:27:37.564667  365843 system_pods.go:89] "kube-proxy-9k5w7" [7b4d65bc-d662-4175-864c-c3d1e6e69e31] Running
	I1206 10:27:37.564685  365843 system_pods.go:89] "kube-scheduler-addons-545880" [9a2c1865-c2cd-4843-a2fb-029ad83e8827] Running
	I1206 10:27:37.564699  365843 system_pods.go:89] "metrics-server-85b7d694d7-6j9l7" [c7953423-aab8-4805-bc6f-57aac150e43a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 10:27:37.564707  365843 system_pods.go:89] "nvidia-device-plugin-daemonset-6sbmv" [255b0abe-864c-4ff8-9125-e4ffb052005c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1206 10:27:37.564732  365843 system_pods.go:89] "registry-6b586f9694-lrjzv" [ccdcbcbc-0689-4862-8dd1-415689504519] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 10:27:37.564744  365843 system_pods.go:89] "registry-creds-764b6fb674-nrw5g" [db673d4c-19b0-4f94-bf04-a6cbd90211bf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 10:27:37.564752  365843 system_pods.go:89] "registry-proxy-j4zp9" [7cd8d746-23f2-448e-a9a6-8281f58e0d70] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 10:27:37.564762  365843 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4trlb" [316c179e-ac90-449c-9f3e-ffcb1016f2ff] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 10:27:37.564769  365843 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ncslf" [bb9699cb-e616-4ee5-a31e-886ed8794ce4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 10:27:37.564782  365843 system_pods.go:89] "storage-provisioner" [5f75f24f-7f45-4a01-a603-961b4f05ca09] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 10:27:37.564812  365843 retry.go:31] will retry after 428.098269ms: missing components: kube-dns
	I1206 10:27:37.759619  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:37.939010  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:37.939156  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:38.040572  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:38.041546  365843 system_pods.go:86] 19 kube-system pods found
	I1206 10:27:38.041580  365843 system_pods.go:89] "coredns-66bc5c9577-mf79v" [9fb9121c-9008-465a-bb08-cf1bfb566a86] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:27:38.041590  365843 system_pods.go:89] "csi-hostpath-attacher-0" [c0a59cbc-3f64-4410-8abf-b27d4ffe5eca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 10:27:38.041636  365843 system_pods.go:89] "csi-hostpath-resizer-0" [0c654e88-775b-4aa4-a773-413240f38d73] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1206 10:27:38.041645  365843 system_pods.go:89] "csi-hostpathplugin-t892t" [e990c5a0-fb3d-4475-b96a-847521762ef2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1206 10:27:38.041657  365843 system_pods.go:89] "etcd-addons-545880" [54accd8f-75ed-493f-adf4-e8b3bdfc9ee7] Running
	I1206 10:27:38.041689  365843 system_pods.go:89] "kindnet-fmxlt" [1cdbf4b9-d4ed-42da-be4b-c7eb54a81d3a] Running
	I1206 10:27:38.041701  365843 system_pods.go:89] "kube-apiserver-addons-545880" [a9526e71-012f-47ce-a364-b6415e72b9d4] Running
	I1206 10:27:38.041707  365843 system_pods.go:89] "kube-controller-manager-addons-545880" [2b34ecb8-3436-46c0-b985-8fb08a699157] Running
	I1206 10:27:38.041720  365843 system_pods.go:89] "kube-ingress-dns-minikube" [5b393164-cddd-4102-8b6e-d024ce6bcb4c] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 10:27:38.041731  365843 system_pods.go:89] "kube-proxy-9k5w7" [7b4d65bc-d662-4175-864c-c3d1e6e69e31] Running
	I1206 10:27:38.041738  365843 system_pods.go:89] "kube-scheduler-addons-545880" [9a2c1865-c2cd-4843-a2fb-029ad83e8827] Running
	I1206 10:27:38.041766  365843 system_pods.go:89] "metrics-server-85b7d694d7-6j9l7" [c7953423-aab8-4805-bc6f-57aac150e43a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 10:27:38.041782  365843 system_pods.go:89] "nvidia-device-plugin-daemonset-6sbmv" [255b0abe-864c-4ff8-9125-e4ffb052005c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1206 10:27:38.041794  365843 system_pods.go:89] "registry-6b586f9694-lrjzv" [ccdcbcbc-0689-4862-8dd1-415689504519] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 10:27:38.041807  365843 system_pods.go:89] "registry-creds-764b6fb674-nrw5g" [db673d4c-19b0-4f94-bf04-a6cbd90211bf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 10:27:38.041818  365843 system_pods.go:89] "registry-proxy-j4zp9" [7cd8d746-23f2-448e-a9a6-8281f58e0d70] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 10:27:38.041825  365843 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4trlb" [316c179e-ac90-449c-9f3e-ffcb1016f2ff] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 10:27:38.041865  365843 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ncslf" [bb9699cb-e616-4ee5-a31e-886ed8794ce4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 10:27:38.041883  365843 system_pods.go:89] "storage-provisioner" [5f75f24f-7f45-4a01-a603-961b4f05ca09] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 10:27:38.041908  365843 retry.go:31] will retry after 474.893963ms: missing components: kube-dns
	I1206 10:27:38.248288  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:38.435171  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:38.435305  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:38.484039  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:38.520834  365843 system_pods.go:86] 19 kube-system pods found
	I1206 10:27:38.520877  365843 system_pods.go:89] "coredns-66bc5c9577-mf79v" [9fb9121c-9008-465a-bb08-cf1bfb566a86] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:27:38.520886  365843 system_pods.go:89] "csi-hostpath-attacher-0" [c0a59cbc-3f64-4410-8abf-b27d4ffe5eca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 10:27:38.520894  365843 system_pods.go:89] "csi-hostpath-resizer-0" [0c654e88-775b-4aa4-a773-413240f38d73] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1206 10:27:38.520941  365843 system_pods.go:89] "csi-hostpathplugin-t892t" [e990c5a0-fb3d-4475-b96a-847521762ef2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1206 10:27:38.520956  365843 system_pods.go:89] "etcd-addons-545880" [54accd8f-75ed-493f-adf4-e8b3bdfc9ee7] Running
	I1206 10:27:38.520963  365843 system_pods.go:89] "kindnet-fmxlt" [1cdbf4b9-d4ed-42da-be4b-c7eb54a81d3a] Running
	I1206 10:27:38.520968  365843 system_pods.go:89] "kube-apiserver-addons-545880" [a9526e71-012f-47ce-a364-b6415e72b9d4] Running
	I1206 10:27:38.520976  365843 system_pods.go:89] "kube-controller-manager-addons-545880" [2b34ecb8-3436-46c0-b985-8fb08a699157] Running
	I1206 10:27:38.520983  365843 system_pods.go:89] "kube-ingress-dns-minikube" [5b393164-cddd-4102-8b6e-d024ce6bcb4c] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 10:27:38.520990  365843 system_pods.go:89] "kube-proxy-9k5w7" [7b4d65bc-d662-4175-864c-c3d1e6e69e31] Running
	I1206 10:27:38.521016  365843 system_pods.go:89] "kube-scheduler-addons-545880" [9a2c1865-c2cd-4843-a2fb-029ad83e8827] Running
	I1206 10:27:38.521022  365843 system_pods.go:89] "metrics-server-85b7d694d7-6j9l7" [c7953423-aab8-4805-bc6f-57aac150e43a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 10:27:38.521035  365843 system_pods.go:89] "nvidia-device-plugin-daemonset-6sbmv" [255b0abe-864c-4ff8-9125-e4ffb052005c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1206 10:27:38.521042  365843 system_pods.go:89] "registry-6b586f9694-lrjzv" [ccdcbcbc-0689-4862-8dd1-415689504519] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 10:27:38.521051  365843 system_pods.go:89] "registry-creds-764b6fb674-nrw5g" [db673d4c-19b0-4f94-bf04-a6cbd90211bf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 10:27:38.521062  365843 system_pods.go:89] "registry-proxy-j4zp9" [7cd8d746-23f2-448e-a9a6-8281f58e0d70] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 10:27:38.521084  365843 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4trlb" [316c179e-ac90-449c-9f3e-ffcb1016f2ff] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 10:27:38.521101  365843 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ncslf" [bb9699cb-e616-4ee5-a31e-886ed8794ce4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 10:27:38.521109  365843 system_pods.go:89] "storage-provisioner" [5f75f24f-7f45-4a01-a603-961b4f05ca09] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 10:27:38.521129  365843 retry.go:31] will retry after 540.417916ms: missing components: kube-dns
	I1206 10:27:38.748507  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:38.935615  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:38.936021  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:38.984334  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:39.066417  365843 system_pods.go:86] 19 kube-system pods found
	I1206 10:27:39.066453  365843 system_pods.go:89] "coredns-66bc5c9577-mf79v" [9fb9121c-9008-465a-bb08-cf1bfb566a86] Running
	I1206 10:27:39.066464  365843 system_pods.go:89] "csi-hostpath-attacher-0" [c0a59cbc-3f64-4410-8abf-b27d4ffe5eca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 10:27:39.066471  365843 system_pods.go:89] "csi-hostpath-resizer-0" [0c654e88-775b-4aa4-a773-413240f38d73] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1206 10:27:39.066527  365843 system_pods.go:89] "csi-hostpathplugin-t892t" [e990c5a0-fb3d-4475-b96a-847521762ef2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1206 10:27:39.066533  365843 system_pods.go:89] "etcd-addons-545880" [54accd8f-75ed-493f-adf4-e8b3bdfc9ee7] Running
	I1206 10:27:39.066538  365843 system_pods.go:89] "kindnet-fmxlt" [1cdbf4b9-d4ed-42da-be4b-c7eb54a81d3a] Running
	I1206 10:27:39.066548  365843 system_pods.go:89] "kube-apiserver-addons-545880" [a9526e71-012f-47ce-a364-b6415e72b9d4] Running
	I1206 10:27:39.066552  365843 system_pods.go:89] "kube-controller-manager-addons-545880" [2b34ecb8-3436-46c0-b985-8fb08a699157] Running
	I1206 10:27:39.066567  365843 system_pods.go:89] "kube-ingress-dns-minikube" [5b393164-cddd-4102-8b6e-d024ce6bcb4c] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 10:27:39.066573  365843 system_pods.go:89] "kube-proxy-9k5w7" [7b4d65bc-d662-4175-864c-c3d1e6e69e31] Running
	I1206 10:27:39.066582  365843 system_pods.go:89] "kube-scheduler-addons-545880" [9a2c1865-c2cd-4843-a2fb-029ad83e8827] Running
	I1206 10:27:39.066588  365843 system_pods.go:89] "metrics-server-85b7d694d7-6j9l7" [c7953423-aab8-4805-bc6f-57aac150e43a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 10:27:39.066595  365843 system_pods.go:89] "nvidia-device-plugin-daemonset-6sbmv" [255b0abe-864c-4ff8-9125-e4ffb052005c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1206 10:27:39.066607  365843 system_pods.go:89] "registry-6b586f9694-lrjzv" [ccdcbcbc-0689-4862-8dd1-415689504519] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 10:27:39.066613  365843 system_pods.go:89] "registry-creds-764b6fb674-nrw5g" [db673d4c-19b0-4f94-bf04-a6cbd90211bf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 10:27:39.066623  365843 system_pods.go:89] "registry-proxy-j4zp9" [7cd8d746-23f2-448e-a9a6-8281f58e0d70] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 10:27:39.066630  365843 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4trlb" [316c179e-ac90-449c-9f3e-ffcb1016f2ff] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 10:27:39.066640  365843 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ncslf" [bb9699cb-e616-4ee5-a31e-886ed8794ce4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 10:27:39.066644  365843 system_pods.go:89] "storage-provisioner" [5f75f24f-7f45-4a01-a603-961b4f05ca09] Running
	I1206 10:27:39.066656  365843 system_pods.go:126] duration metric: took 2.082268071s to wait for k8s-apps to be running ...
	I1206 10:27:39.066669  365843 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 10:27:39.066729  365843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:27:39.081542  365843 system_svc.go:56] duration metric: took 14.863596ms WaitForService to wait for kubelet
	I1206 10:27:39.081575  365843 kubeadm.go:587] duration metric: took 43.685951854s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 10:27:39.081594  365843 node_conditions.go:102] verifying NodePressure condition ...
	I1206 10:27:39.084602  365843 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1206 10:27:39.084634  365843 node_conditions.go:123] node cpu capacity is 2
	I1206 10:27:39.084664  365843 node_conditions.go:105] duration metric: took 3.063624ms to run NodePressure ...
	I1206 10:27:39.084678  365843 start.go:242] waiting for startup goroutines ...
	I1206 10:27:39.249455  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:39.434414  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:39.434617  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:39.483675  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:39.749650  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:39.935787  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:39.935958  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:39.984107  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:40.248590  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:40.434379  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:40.434597  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:40.483640  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:40.749363  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:40.934661  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:40.934820  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:40.983756  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:41.249267  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:41.436521  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:41.436987  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:41.484537  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:41.748793  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:41.935743  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:41.936216  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:41.984724  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:42.249532  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:42.435239  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:42.435341  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:42.484069  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:42.749234  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:42.934576  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:42.934768  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:42.983593  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:43.248700  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:43.435987  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:43.436499  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:43.484377  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:43.749109  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:43.935590  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:43.935864  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:43.984064  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:44.249161  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:44.435074  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:44.435362  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:44.484045  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:44.749200  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:44.935347  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:44.935542  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:44.984613  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:45.251589  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:45.434458  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:45.436464  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:45.484145  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:45.749451  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:45.942275  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:45.943153  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:46.055056  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:46.248971  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:46.434268  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:46.434463  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:46.484410  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:46.749091  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:46.935895  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:46.936081  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:46.984138  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:47.250189  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:47.437085  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:47.438327  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:47.484731  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:47.749835  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:47.936272  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:47.936625  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:47.983876  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:48.249940  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:48.438464  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:48.439189  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:48.484107  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:48.748413  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:48.934505  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:48.934754  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:48.984914  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:49.249687  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:49.434722  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:49.435144  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:49.484328  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:49.749215  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:49.936831  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:49.937268  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:49.984535  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:50.249861  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:50.433909  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:50.433912  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:50.483364  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:50.749136  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:50.934732  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:50.934939  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:50.984308  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:51.249183  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:51.434799  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:51.435682  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:51.484430  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:51.749403  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:51.936389  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:51.937370  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:51.986737  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:52.249779  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:52.436769  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:52.437548  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:52.484102  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:52.749762  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:52.935702  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:52.936085  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:52.984321  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:53.249040  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:53.435866  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:53.436362  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:53.484650  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:53.750880  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:53.935302  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:53.935459  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:53.986291  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:54.248513  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:54.435274  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:54.435499  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:54.483724  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:54.749051  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:54.935581  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:54.935852  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:54.984111  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:55.248434  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:55.434675  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:55.434834  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:55.483673  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:55.749918  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:55.934774  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:55.935001  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:55.984504  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:56.250374  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:56.435243  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:56.435368  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:56.484468  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:56.749862  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:56.935250  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:56.935357  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:56.984951  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:57.248696  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:57.434662  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:57.435414  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:57.484288  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:57.752047  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:57.935526  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:57.935978  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:57.984597  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:58.249336  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:58.436684  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:58.436982  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:58.485017  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:58.765057  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:58.936119  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:58.936437  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:58.985224  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:59.249233  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:59.435603  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:59.435805  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:59.483954  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:59.749290  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:59.937453  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:59.938083  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:00.053141  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:00.272777  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:00.501456  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:00.538103  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:28:00.539150  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:00.748300  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:00.936088  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:00.936325  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:28:00.984718  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:01.249419  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:01.435863  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:01.435997  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:28:01.484117  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:01.749299  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:01.935914  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:28:01.936077  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:01.983944  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:02.250285  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:02.436739  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:28:02.437141  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:02.484610  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:02.749793  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:02.935538  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:28:02.935980  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:02.984558  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:03.249759  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:03.434893  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:28:03.435199  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:03.484193  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:03.749672  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:03.935517  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:03.935825  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:28:03.984153  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:04.249299  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:04.436659  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:28:04.436788  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:04.484437  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:04.749536  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:04.934801  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:28:04.935046  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:04.983614  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:05.249701  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:05.435338  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:05.440437  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:28:05.484992  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:05.750354  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:05.935567  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:05.935715  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:28:05.983723  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:06.252797  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:06.435584  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:28:06.435745  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:06.483731  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:06.762161  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:06.935066  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:28:06.935272  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:06.986290  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:07.248965  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:07.434684  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:28:07.434884  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:07.486133  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:07.748781  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:07.938923  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:28:07.939489  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:07.996088  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:08.248903  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:08.434153  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:28:08.434514  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:08.484250  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:08.748826  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:08.935444  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:28:08.936586  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:08.984063  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:09.250335  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:09.436349  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:28:09.436808  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:09.484175  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:09.749321  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:09.936925  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:09.937346  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:28:09.984563  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:10.249587  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:10.436132  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:10.436364  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:28:10.484159  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:10.748491  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:10.935939  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:28:10.936237  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:10.983809  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:11.248595  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:11.435116  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:11.435277  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:28:11.484155  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:11.748619  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:11.934997  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:28:11.935368  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:11.984291  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:12.249101  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:12.435680  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:28:12.435730  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:12.483853  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:12.749164  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:12.936301  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:28:12.936755  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:12.984295  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:13.251526  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:13.438777  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:28:13.439204  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:13.484028  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:13.748035  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:13.935608  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:13.935793  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:28:13.983943  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:14.250930  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:14.437038  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:28:14.437263  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:14.484540  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:14.751008  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:14.941395  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:14.949286  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:28:14.984368  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:15.249253  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:15.434842  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:28:15.435271  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:15.483947  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:15.749647  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:15.938281  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:28:15.940832  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:16.040179  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:16.251926  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:16.437381  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:28:16.438854  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:16.484930  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:16.748949  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:16.935772  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:16.935948  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:28:16.984613  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:17.248848  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:17.435132  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:17.435479  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:28:17.484563  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:17.749120  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:17.935168  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:28:17.935353  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:17.983996  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:18.248758  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:18.434219  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:28:18.434396  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:18.484383  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:18.749280  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:18.936548  365843 kapi.go:107] duration metric: took 1m17.005741109s to wait for kubernetes.io/minikube-addons=registry ...
	I1206 10:28:18.936840  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:18.984173  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:19.249056  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:19.436492  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:19.484214  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:19.752746  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:19.934934  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:19.984273  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:20.249800  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:20.434437  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:20.485164  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:20.749060  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:20.935694  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:20.986927  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:21.249986  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:21.434374  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:21.484466  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:21.749294  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:21.934677  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:21.984928  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:22.249490  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:22.436784  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:22.536678  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:22.751450  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:22.934081  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:22.984050  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:23.249330  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:23.444522  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:23.485064  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:23.749118  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:23.934582  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:23.985992  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:24.250121  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:24.434137  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:24.484121  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:24.750123  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:24.956832  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:24.983926  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:25.254072  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:25.435977  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:25.484323  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:25.749268  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:25.937797  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:25.984839  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:26.249078  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:26.434559  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:26.484689  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:26.749353  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:26.933867  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:26.984173  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:27.248957  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:27.434298  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:27.487409  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:27.749096  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:27.934098  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:27.984138  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:28.248838  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:28.434045  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:28.483794  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:28.748521  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:28.934229  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:28.984415  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:29.249259  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:29.437327  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:29.485317  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:29.754914  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:29.944286  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:30.056981  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:30.257761  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:30.434615  365843 kapi.go:107] duration metric: took 1m28.504005454s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1206 10:28:30.485776  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:30.749169  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:30.984428  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:31.249492  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:31.484850  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:31.748787  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:31.984406  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:32.249681  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:32.485003  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:32.749270  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:32.984408  365843 kapi.go:107] duration metric: took 1m27.503812031s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1206 10:28:32.987655  365843 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-545880 cluster.
	I1206 10:28:32.990933  365843 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1206 10:28:32.993987  365843 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
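As the messages above note, the gcp-auth webhook can be bypassed per pod via the `gcp-auth-skip-secret` label. A minimal sketch of opting a pod out at creation time (pod name and image are illustrative, not from this run):

    kubectl run no-creds-probe --image=registry.k8s.io/pause:3.9 \
      --labels=gcp-auth-skip-secret=true

The label must be present when the pod is admitted; labeling an already-running pod does not unmount the credentials, which is why the log suggests recreating existing pods or rerunning the addon with --refresh.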
	I1206 10:28:33.249021  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:33.749164  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:34.248668  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:34.749644  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:35.249655  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:35.748374  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:36.250277  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:36.749375  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:37.249829  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:37.748586  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:38.249402  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:38.749686  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:39.248556  365843 kapi.go:107] duration metric: took 1m37.003490424s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1206 10:28:39.251914  365843 out.go:179] * Enabled addons: nvidia-device-plugin, storage-provisioner, amd-gpu-device-plugin, storage-provisioner-rancher, inspektor-gadget, registry-creds, cloud-spanner, ingress-dns, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1206 10:28:39.254774  365843 addons.go:530] duration metric: took 1m43.858897793s for enable addons: enabled=[nvidia-device-plugin storage-provisioner amd-gpu-device-plugin storage-provisioner-rancher inspektor-gadget registry-creds cloud-spanner ingress-dns metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1206 10:28:39.254842  365843 start.go:247] waiting for cluster config update ...
	I1206 10:28:39.254867  365843 start.go:256] writing updated cluster config ...
	I1206 10:28:39.255177  365843 ssh_runner.go:195] Run: rm -f paused
	I1206 10:28:39.259881  365843 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 10:28:39.263446  365843 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mf79v" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:28:39.269116  365843 pod_ready.go:94] pod "coredns-66bc5c9577-mf79v" is "Ready"
	I1206 10:28:39.269148  365843 pod_ready.go:86] duration metric: took 5.674593ms for pod "coredns-66bc5c9577-mf79v" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:28:39.271536  365843 pod_ready.go:83] waiting for pod "etcd-addons-545880" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:28:39.276140  365843 pod_ready.go:94] pod "etcd-addons-545880" is "Ready"
	I1206 10:28:39.276165  365843 pod_ready.go:86] duration metric: took 4.594698ms for pod "etcd-addons-545880" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:28:39.278745  365843 pod_ready.go:83] waiting for pod "kube-apiserver-addons-545880" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:28:39.283788  365843 pod_ready.go:94] pod "kube-apiserver-addons-545880" is "Ready"
	I1206 10:28:39.283817  365843 pod_ready.go:86] duration metric: took 5.048471ms for pod "kube-apiserver-addons-545880" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:28:39.286371  365843 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-545880" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:28:39.664488  365843 pod_ready.go:94] pod "kube-controller-manager-addons-545880" is "Ready"
	I1206 10:28:39.664517  365843 pod_ready.go:86] duration metric: took 378.12241ms for pod "kube-controller-manager-addons-545880" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:28:39.864365  365843 pod_ready.go:83] waiting for pod "kube-proxy-9k5w7" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:28:40.263696  365843 pod_ready.go:94] pod "kube-proxy-9k5w7" is "Ready"
	I1206 10:28:40.263808  365843 pod_ready.go:86] duration metric: took 399.402768ms for pod "kube-proxy-9k5w7" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:28:40.465379  365843 pod_ready.go:83] waiting for pod "kube-scheduler-addons-545880" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:28:40.863559  365843 pod_ready.go:94] pod "kube-scheduler-addons-545880" is "Ready"
	I1206 10:28:40.863588  365843 pod_ready.go:86] duration metric: took 397.930534ms for pod "kube-scheduler-addons-545880" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:28:40.863602  365843 pod_ready.go:40] duration metric: took 1.603688491s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
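The label selectors in the readiness wait above map directly onto kubectl queries; a hedged sketch of reproducing one of these checks by hand (namespace and selectors taken from the log, commands illustrative):

    kubectl -n kube-system get pods -l k8s-app=kube-dns
    kubectl -n kube-system wait pod -l component=kube-apiserver \
      --for=condition=Ready --timeout=4m0s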
	I1206 10:28:40.927696  365843 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1206 10:28:40.931346  365843 out.go:179] * Done! kubectl is now configured to use "addons-545880" cluster and "default" namespace by default
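A quick way to confirm the context switch the log reports (output depends on the local kubeconfig):

    kubectl config current-context   # expected: addons-545880
    kubectl get pods -A              # lists the addon pods shown in the container status dump below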
	
	
	==> CRI-O <==
	Dec 06 10:31:28 addons-545880 crio[828]: time="2025-12-06T10:31:28.786007795Z" level=info msg="Removed container 55905c08d6f459a7b90f62394a6702f0215c416af29568fd4489d03e762a7b68: kube-system/registry-creds-764b6fb674-nrw5g/registry-creds" id=9f2de1f3-4b71-41dc-b9b0-e56a60086325 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 10:31:41 addons-545880 crio[828]: time="2025-12-06T10:31:41.709998207Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-pf8cn/POD" id=f4e60216-c31b-4227-833f-330f3335b0b0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 10:31:41 addons-545880 crio[828]: time="2025-12-06T10:31:41.710060854Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 10:31:41 addons-545880 crio[828]: time="2025-12-06T10:31:41.723479572Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-pf8cn Namespace:default ID:c391489746372eaab00b4067ee3d37c27d5e1221d896122463f0c719da430dee UID:b1f8e8f2-c073-4103-92d2-b1cf0fd1685e NetNS:/var/run/netns/d1eaa959-cd21-43fe-b08c-4fa3ba814425 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001ebf298}] Aliases:map[]}"
	Dec 06 10:31:41 addons-545880 crio[828]: time="2025-12-06T10:31:41.72365527Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-pf8cn to CNI network \"kindnet\" (type=ptp)"
	Dec 06 10:31:41 addons-545880 crio[828]: time="2025-12-06T10:31:41.73931157Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-pf8cn Namespace:default ID:c391489746372eaab00b4067ee3d37c27d5e1221d896122463f0c719da430dee UID:b1f8e8f2-c073-4103-92d2-b1cf0fd1685e NetNS:/var/run/netns/d1eaa959-cd21-43fe-b08c-4fa3ba814425 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001ebf298}] Aliases:map[]}"
	Dec 06 10:31:41 addons-545880 crio[828]: time="2025-12-06T10:31:41.739828629Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-pf8cn for CNI network kindnet (type=ptp)"
	Dec 06 10:31:41 addons-545880 crio[828]: time="2025-12-06T10:31:41.744911545Z" level=info msg="Ran pod sandbox c391489746372eaab00b4067ee3d37c27d5e1221d896122463f0c719da430dee with infra container: default/hello-world-app-5d498dc89-pf8cn/POD" id=f4e60216-c31b-4227-833f-330f3335b0b0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 06 10:31:41 addons-545880 crio[828]: time="2025-12-06T10:31:41.74620155Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=3fd4d088-4152-4f27-81ae-bc37b72ab77d name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:31:41 addons-545880 crio[828]: time="2025-12-06T10:31:41.746683187Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=3fd4d088-4152-4f27-81ae-bc37b72ab77d name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:31:41 addons-545880 crio[828]: time="2025-12-06T10:31:41.746813141Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=3fd4d088-4152-4f27-81ae-bc37b72ab77d name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:31:41 addons-545880 crio[828]: time="2025-12-06T10:31:41.750208592Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=35736a43-7bc6-45c7-bc6f-dab8697e2487 name=/runtime.v1.ImageService/PullImage
	Dec 06 10:31:41 addons-545880 crio[828]: time="2025-12-06T10:31:41.753889625Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 06 10:31:42 addons-545880 crio[828]: time="2025-12-06T10:31:42.464071312Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=35736a43-7bc6-45c7-bc6f-dab8697e2487 name=/runtime.v1.ImageService/PullImage
	Dec 06 10:31:42 addons-545880 crio[828]: time="2025-12-06T10:31:42.46496604Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=b590cfff-58db-4190-8296-31c4639e236a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:31:42 addons-545880 crio[828]: time="2025-12-06T10:31:42.46694858Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=9e15d2cf-2c9e-44a1-9d98-31ce028da8a4 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:31:42 addons-545880 crio[828]: time="2025-12-06T10:31:42.474676992Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-pf8cn/hello-world-app" id=f96b61e1-501d-4a8f-8bb5-6e91af68f41f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 10:31:42 addons-545880 crio[828]: time="2025-12-06T10:31:42.474814191Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 10:31:42 addons-545880 crio[828]: time="2025-12-06T10:31:42.483720738Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 10:31:42 addons-545880 crio[828]: time="2025-12-06T10:31:42.48396874Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/f25c925c02e6c1aa6a3703268186963e9bc8663bede953bf93d1cb74f4b21c15/merged/etc/passwd: no such file or directory"
	Dec 06 10:31:42 addons-545880 crio[828]: time="2025-12-06T10:31:42.484007124Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/f25c925c02e6c1aa6a3703268186963e9bc8663bede953bf93d1cb74f4b21c15/merged/etc/group: no such file or directory"
	Dec 06 10:31:42 addons-545880 crio[828]: time="2025-12-06T10:31:42.486229396Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 10:31:42 addons-545880 crio[828]: time="2025-12-06T10:31:42.513396344Z" level=info msg="Created container 2c5e6e245242fa08e0ae97c2c5fdcfbbba0cc3e805f1638d926cea2f6a0f3dd8: default/hello-world-app-5d498dc89-pf8cn/hello-world-app" id=f96b61e1-501d-4a8f-8bb5-6e91af68f41f name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 10:31:42 addons-545880 crio[828]: time="2025-12-06T10:31:42.514498491Z" level=info msg="Starting container: 2c5e6e245242fa08e0ae97c2c5fdcfbbba0cc3e805f1638d926cea2f6a0f3dd8" id=4e8a2852-2809-4e50-8c3c-a13d71e0e52b name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 10:31:42 addons-545880 crio[828]: time="2025-12-06T10:31:42.517371107Z" level=info msg="Started container" PID=7148 containerID=2c5e6e245242fa08e0ae97c2c5fdcfbbba0cc3e805f1638d926cea2f6a0f3dd8 description=default/hello-world-app-5d498dc89-pf8cn/hello-world-app id=4e8a2852-2809-4e50-8c3c-a13d71e0e52b name=/runtime.v1.RuntimeService/StartContainer sandboxID=c391489746372eaab00b4067ee3d37c27d5e1221d896122463f0c719da430dee
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	2c5e6e245242f       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   c391489746372       hello-world-app-5d498dc89-pf8cn            default
	febc6c0748906       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             14 seconds ago           Exited              registry-creds                           4                   49af8ee1b7cef       registry-creds-764b6fb674-nrw5g            kube-system
	bcd981f7bd6ba       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90                                              2 minutes ago            Running             nginx                                    0                   34482c0e4f581       nginx                                      default
	f3ca8ee2f4c77       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          2 minutes ago            Running             busybox                                  0                   517a7fbc44d33       busybox                                    default
	76e109752916e       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   465e98299084f       csi-hostpathplugin-t892t                   kube-system
	c3f6082a7a0c7       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   465e98299084f       csi-hostpathplugin-t892t                   kube-system
	d59b55d8c46bd       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   465e98299084f       csi-hostpathplugin-t892t                   kube-system
	ee27785571f14       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   465e98299084f       csi-hostpathplugin-t892t                   kube-system
	8839ed393577d       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   5c27e30d73602       gcp-auth-78565c9fb4-r2csd                  gcp-auth
	fd515714eb80c       registry.k8s.io/ingress-nginx/controller@sha256:655333e68deab34ee3701f400c4d5d9709000cdfdadb802e4bd7500b027e1259                             3 minutes ago            Running             controller                               0                   c2b9aaf3dd1c6       ingress-nginx-controller-6c8bf45fb-g849k   ingress-nginx
	d44b8d7344fa0       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:fadc7bf59b69965b6707edb68022bed4f55a1f99b15f7acd272793e48f171496                            3 minutes ago            Running             gadget                                   0                   92fa074a37e9b       gadget-hv92d                               gadget
	b58456cd2cfa5       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   465e98299084f       csi-hostpathplugin-t892t                   kube-system
	77b77d1ecb28a       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   d3da23cc7380d       registry-proxy-j4zp9                       kube-system
	0bb771e3965c7       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago            Running             csi-resizer                              0                   1ef5e905399af       csi-hostpath-resizer-0                     kube-system
	82475061c7165       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        3 minutes ago            Running             metrics-server                           0                   5408982dabead       metrics-server-85b7d694d7-6j9l7            kube-system
	52d954765a231       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           3 minutes ago            Running             registry                                 0                   0d6dab2fc1c77       registry-6b586f9694-lrjzv                  kube-system
	e292596ad2f80       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   f1846b10c5ad0       snapshot-controller-7d9fbc56b8-4trlb       kube-system
	5c8f0b1a09ff1       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             3 minutes ago            Running             local-path-provisioner                   0                   bdebb1b5ca15f       local-path-provisioner-648f6765c9-gk2vf    local-path-storage
	ab9b79c2c68c1       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               3 minutes ago            Running             minikube-ingress-dns                     0                   3858fd11cabc3       kube-ingress-dns-minikube                  kube-system
	56ab0813eec7a       32daba64b064c571f27dbd4e285969f47f8e5dd6c692279b48622e941b4d137f                                                                             3 minutes ago            Exited              patch                                    2                   b29638b638993       ingress-nginx-admission-patch-pb2fq        ingress-nginx
	85307815e9595       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   3 minutes ago            Exited              create                                   0                   8684d8ea30d3e       ingress-nginx-admission-create-f28bm       ingress-nginx
	eaaabe40faa63       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   465e98299084f       csi-hostpathplugin-t892t                   kube-system
	f05b7f270b36a       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               3 minutes ago            Running             cloud-spanner-emulator                   0                   6ccd0e009043d       cloud-spanner-emulator-5bdddb765-c9nvk     default
	aea66f3787491       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   0bef96d69b1e8       nvidia-device-plugin-daemonset-6sbmv       kube-system
	bbd2d73693ff1       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   085dda5e55ccf       snapshot-controller-7d9fbc56b8-ncslf       kube-system
	e656eea2f31f9       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              4 minutes ago            Running             yakd                                     0                   e6f8f3215b678       yakd-dashboard-5ff678cb9-gcfw2             yakd-dashboard
	778b08b9b628c       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             4 minutes ago            Running             csi-attacher                             0                   ec0f8396a1814       csi-hostpath-attacher-0                    kube-system
	358be0ebbc23e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   fc4d9c73e214b       coredns-66bc5c9577-mf79v                   kube-system
	1c178782b46ae       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   f1add1a4b94ab       storage-provisioner                        kube-system
	4f2a86e87c1bf       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                                                             4 minutes ago            Running             kube-proxy                               0                   079d34e17a383       kube-proxy-9k5w7                           kube-system
	410f38934f188       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             4 minutes ago            Running             kindnet-cni                              0                   5d97ef012cd86       kindnet-fmxlt                              kube-system
	69ffc3958d44b       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                                                             4 minutes ago            Running             kube-scheduler                           0                   aaa0d430bb667       kube-scheduler-addons-545880               kube-system
	66618904a8d73       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                                                             4 minutes ago            Running             kube-controller-manager                  0                   0469ea34da844       kube-controller-manager-addons-545880      kube-system
	9717574a82552       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                                                             4 minutes ago            Running             etcd                                     0                   f64b27320e32b       etcd-addons-545880                         kube-system
	6bfecd83e062d       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                                                             4 minutes ago            Running             kube-apiserver                           0                   83b51357fa75a       kube-apiserver-addons-545880               kube-system
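	
	Note: this listing reflects the node's CRI-O runtime state at collection time. On a live cluster the same view can be reproduced (illustrative command, not part of the recorded run) with:
	
	  out/minikube-linux-arm64 -p addons-545880 ssh -- sudo crictl ps -a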
	
	
	==> coredns [358be0ebbc23e420f6fde28e811fd30f1d4064a0e72dfb910c4e719a8d628d3b] <==
	[INFO] 10.244.0.17:45328 - 35889 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002223166s
	[INFO] 10.244.0.17:45328 - 49176 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000152255s
	[INFO] 10.244.0.17:45328 - 8067 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000077121s
	[INFO] 10.244.0.17:44653 - 46363 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000228523s
	[INFO] 10.244.0.17:44653 - 46133 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000082676s
	[INFO] 10.244.0.17:38277 - 45845 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00009971s
	[INFO] 10.244.0.17:38277 - 45640 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000234891s
	[INFO] 10.244.0.17:51825 - 7575 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000112395s
	[INFO] 10.244.0.17:51825 - 7395 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000107275s
	[INFO] 10.244.0.17:34245 - 25 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001333591s
	[INFO] 10.244.0.17:34245 - 454 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001441916s
	[INFO] 10.244.0.17:50363 - 8960 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000133154s
	[INFO] 10.244.0.17:50363 - 8821 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000253631s
	[INFO] 10.244.0.21:54344 - 16901 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000157187s
	[INFO] 10.244.0.21:56225 - 26260 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000112378s
	[INFO] 10.244.0.21:40270 - 49635 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000090914s
	[INFO] 10.244.0.21:57368 - 56946 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000095197s
	[INFO] 10.244.0.21:58809 - 54875 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000099316s
	[INFO] 10.244.0.21:51237 - 50010 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000071172s
	[INFO] 10.244.0.21:45504 - 44889 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001893227s
	[INFO] 10.244.0.21:44692 - 29443 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001696007s
	[INFO] 10.244.0.21:42279 - 29185 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000579632s
	[INFO] 10.244.0.21:57705 - 26673 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.001400562s
	[INFO] 10.244.0.23:37094 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000217068s
	[INFO] 10.244.0.23:51895 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000193577s
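	
	Note: the NXDOMAIN entries above are the pod resolver's search-path expansion (ndots:5 is the Kubernetes default), walking the cluster search domains and the host's us-east-2.compute.internal suffix before the bare service name resolves NOERROR. A trailing-dot FQDN skips the expansion (illustrative):
	
	  kubectl --context addons-545880 run dns-check --rm -it --restart=Never --image=busybox:1.36 -- nslookup registry.kube-system.svc.cluster.local.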
	
	
	==> describe nodes <==
	Name:               addons-545880
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-545880
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
	                    minikube.k8s.io/name=addons-545880
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T10_26_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-545880
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-545880"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 10:26:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-545880
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 10:31:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 10:31:37 +0000   Sat, 06 Dec 2025 10:26:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 10:31:37 +0000   Sat, 06 Dec 2025 10:26:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 10:31:37 +0000   Sat, 06 Dec 2025 10:26:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 10:31:37 +0000   Sat, 06 Dec 2025 10:27:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-545880
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 276ce0203b90767726fe164c6931608e
	  System UUID:                a99aad51-7303-4ca2-bd24-4fd3bb983487
	  Boot ID:                    b73b980d-8d6b-40e0-82fa-5c1b47c1eef7
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  default                     cloud-spanner-emulator-5bdddb765-c9nvk      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m45s
	  default                     hello-world-app-5d498dc89-pf8cn             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  gadget                      gadget-hv92d                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  gcp-auth                    gcp-auth-78565c9fb4-r2csd                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-g849k    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m42s
	  kube-system                 coredns-66bc5c9577-mf79v                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m48s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 csi-hostpathplugin-t892t                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 etcd-addons-545880                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4m53s
	  kube-system                 kindnet-fmxlt                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m49s
	  kube-system                 kube-apiserver-addons-545880                250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-controller-manager-addons-545880       200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 kube-proxy-9k5w7                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 kube-scheduler-addons-545880                100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 metrics-server-85b7d694d7-6j9l7             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m43s
	  kube-system                 nvidia-device-plugin-daemonset-6sbmv        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 registry-6b586f9694-lrjzv                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 registry-creds-764b6fb674-nrw5g             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 registry-proxy-j4zp9                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 snapshot-controller-7d9fbc56b8-4trlb        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 snapshot-controller-7d9fbc56b8-ncslf        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m44s
	  local-path-storage          local-path-provisioner-648f6765c9-gk2vf     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-gcfw2              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age              From             Message
	  ----     ------                   ----             ----             -------
	  Normal   Starting                 4m46s            kube-proxy       
	  Normal   NodeHasSufficientMemory  5m (x8 over 5m)  kubelet          Node addons-545880 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m (x8 over 5m)  kubelet          Node addons-545880 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m (x8 over 5m)  kubelet          Node addons-545880 status is now: NodeHasSufficientPID
	  Normal   Starting                 4m53s            kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m53s            kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m53s            kubelet          Node addons-545880 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m53s            kubelet          Node addons-545880 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m53s            kubelet          Node addons-545880 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m49s            node-controller  Node addons-545880 event: Registered Node addons-545880 in Controller
	  Normal   NodeReady                4m7s             kubelet          Node addons-545880 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec 6 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014752] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503231] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.065820] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.901896] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.944603] kauditd_printk_skb: 39 callbacks suppressed
	[Dec 6 09:04] hrtimer: interrupt took 32230230 ns
	[Dec 6 10:25] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 6 10:26] overlayfs: idmapped layers are currently not supported
	[  +0.066821] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [9717574a8255200f8dddcf7a2550e63bdb6b4bb664ec25aeb8635f9277183f01] <==
	{"level":"warn","ts":"2025-12-06T10:26:45.972153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:26:45.986921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:26:46.002120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:26:46.052607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:26:46.073844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:26:46.088418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:26:46.115959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:26:46.131509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:26:46.151590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:26:46.184625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:26:46.215467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:26:46.263516Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:26:46.283702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:26:46.320574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:26:46.331871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:26:46.362691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:26:46.377088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:26:46.393600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:26:46.491195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:27:02.508252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:27:02.522562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:27:24.271499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:27:24.301768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:27:24.345868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:27:24.372638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54438","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [8839ed393577da519560ec647e7e40f8a85575cec116939c4863405e2813374a] <==
	2025/12/06 10:28:32 GCP Auth Webhook started!
	2025/12/06 10:28:41 Ready to marshal response ...
	2025/12/06 10:28:41 Ready to write response ...
	2025/12/06 10:28:41 Ready to marshal response ...
	2025/12/06 10:28:41 Ready to write response ...
	2025/12/06 10:28:41 Ready to marshal response ...
	2025/12/06 10:28:41 Ready to write response ...
	2025/12/06 10:29:03 Ready to marshal response ...
	2025/12/06 10:29:03 Ready to write response ...
	2025/12/06 10:29:10 Ready to marshal response ...
	2025/12/06 10:29:10 Ready to write response ...
	2025/12/06 10:29:22 Ready to marshal response ...
	2025/12/06 10:29:22 Ready to write response ...
	2025/12/06 10:29:27 Ready to marshal response ...
	2025/12/06 10:29:27 Ready to write response ...
	2025/12/06 10:29:49 Ready to marshal response ...
	2025/12/06 10:29:49 Ready to write response ...
	2025/12/06 10:29:50 Ready to marshal response ...
	2025/12/06 10:29:50 Ready to write response ...
	2025/12/06 10:29:58 Ready to marshal response ...
	2025/12/06 10:29:58 Ready to write response ...
	2025/12/06 10:31:41 Ready to marshal response ...
	2025/12/06 10:31:41 Ready to write response ...
	
	
	==> kernel <==
	 10:31:43 up  2:14,  0 user,  load average: 1.55, 2.18, 1.96
	Linux addons-545880 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [410f38934f188529387872c7a0345e42f47f3295f320a1765aa24e1b9a271d4d] <==
	I1206 10:29:36.460440       1 main.go:301] handling current node
	I1206 10:29:46.461702       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 10:29:46.461941       1 main.go:301] handling current node
	I1206 10:29:56.459818       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 10:29:56.459942       1 main.go:301] handling current node
	I1206 10:30:06.460345       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 10:30:06.460379       1 main.go:301] handling current node
	I1206 10:30:16.461572       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 10:30:16.461606       1 main.go:301] handling current node
	I1206 10:30:26.459565       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 10:30:26.459604       1 main.go:301] handling current node
	I1206 10:30:36.467134       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 10:30:36.467171       1 main.go:301] handling current node
	I1206 10:30:46.460752       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 10:30:46.460803       1 main.go:301] handling current node
	I1206 10:30:56.466664       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 10:30:56.466771       1 main.go:301] handling current node
	I1206 10:31:06.467409       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 10:31:06.467521       1 main.go:301] handling current node
	I1206 10:31:16.467465       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 10:31:16.467502       1 main.go:301] handling current node
	I1206 10:31:26.465692       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 10:31:26.465725       1 main.go:301] handling current node
	I1206 10:31:36.466062       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 10:31:36.466098       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6bfecd83e062db176f5124191f88157b58c2a91ba34d40d6c82c9fbd3c6fee47] <==
	I1206 10:28:01.611064       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1206 10:28:01.612204       1 handler_proxy.go:99] no RequestInfo found in the context
	E1206 10:28:01.612281       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1206 10:28:01.612291       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1206 10:28:24.914394       1 handler_proxy.go:99] no RequestInfo found in the context
	E1206 10:28:24.914473       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1206 10:28:24.915106       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.255.148:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.255.148:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.255.148:443: connect: connection refused" logger="UnhandledError"
	E1206 10:28:24.916610       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.255.148:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.255.148:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.255.148:443: connect: connection refused" logger="UnhandledError"
	E1206 10:28:24.924682       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.255.148:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.255.148:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.255.148:443: connect: connection refused" logger="UnhandledError"
	E1206 10:28:24.946563       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.255.148:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.255.148:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.255.148:443: connect: connection refused" logger="UnhandledError"
	I1206 10:28:25.141745       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1206 10:28:52.012497       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:49636: use of closed network connection
	E1206 10:28:52.384445       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:49684: use of closed network connection
	I1206 10:29:21.574717       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1206 10:29:21.898237       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1206 10:29:22.193212       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.108.22.203"}
	E1206 10:29:23.234722       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	E1206 10:29:36.582281       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1206 10:31:41.271124       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.201.24"}
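	
	Note: the 503 "failed to download v1beta1.metrics.k8s.io" errors above come from the aggregation layer probing metrics-server before its endpoint is serving; they cease once the APIService is registered and available (the "Adding GroupVersion metrics.k8s.io v1beta1" line at 10:28:25). Current status can be checked with (illustrative):
	
	  kubectl --context addons-545880 get apiservice v1beta1.metrics.k8s.io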
	
	
	==> kube-controller-manager [66618904a8d73226678429bf63c1faac7f76d45b9de953c282d294fedfc2cfb6] <==
	I1206 10:26:54.295088       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1206 10:26:54.295142       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1206 10:26:54.296606       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1206 10:26:54.296886       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1206 10:26:54.300981       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1206 10:26:54.304407       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1206 10:26:54.304916       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1206 10:26:54.307399       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1206 10:26:54.321190       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 10:26:54.321216       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1206 10:26:54.321231       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1206 10:26:54.344104       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	E1206 10:27:00.742925       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1206 10:27:24.259525       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1206 10:27:24.259676       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1206 10:27:24.259739       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1206 10:27:24.308875       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1206 10:27:24.329800       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1206 10:27:24.360388       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 10:27:24.430348       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 10:27:39.249915       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1206 10:27:54.365739       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1206 10:27:54.442301       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1206 10:28:24.370197       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1206 10:28:24.460281       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [4f2a86e87c1bf385e11b164e78ea4f4e9844b0534c9bec2d841dfb406fec8a56] <==
	I1206 10:26:56.376938       1 server_linux.go:53] "Using iptables proxy"
	I1206 10:26:56.514056       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 10:26:56.614713       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 10:26:56.615249       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1206 10:26:56.615371       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 10:26:56.667473       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 10:26:56.667837       1 server_linux.go:132] "Using iptables Proxier"
	I1206 10:26:56.676194       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 10:26:56.676514       1 server.go:527] "Version info" version="v1.34.2"
	I1206 10:26:56.676531       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 10:26:56.677922       1 config.go:200] "Starting service config controller"
	I1206 10:26:56.677932       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 10:26:56.677948       1 config.go:106] "Starting endpoint slice config controller"
	I1206 10:26:56.677952       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 10:26:56.677968       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 10:26:56.677972       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 10:26:56.678578       1 config.go:309] "Starting node config controller"
	I1206 10:26:56.678586       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 10:26:56.678592       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 10:26:56.778927       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 10:26:56.778961       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 10:26:56.779000       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [69ffc3958d44bd262b1360fdb7c52481a97a7e588cd4d05224b3704341139dd0] <==
	E1206 10:26:47.508706       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1206 10:26:47.508852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 10:26:47.508975       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1206 10:26:47.509082       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 10:26:47.509195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 10:26:47.509296       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 10:26:47.509400       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 10:26:47.509499       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 10:26:47.509604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 10:26:47.509726       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 10:26:47.509877       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 10:26:47.510000       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1206 10:26:47.510102       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1206 10:26:47.510281       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1206 10:26:47.510460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 10:26:48.341133       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 10:26:48.397111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1206 10:26:48.450311       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 10:26:48.490818       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 10:26:48.517246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1206 10:26:48.523629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1206 10:26:48.632507       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1206 10:26:48.723483       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 10:26:48.837152       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1206 10:26:51.795314       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 06 10:31:01 addons-545880 kubelet[1285]: E1206 10:31:01.331638    1285 cadvisor_stats_provider.go:567] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/crio/crio-18858f12bf4e91b070b46717f489357faa665d45fe3471e74d412a52a2bb6525\": RecentStats: unable to find data in memory cache]"
	Dec 06 10:31:03 addons-545880 kubelet[1285]: I1206 10:31:03.211275    1285 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-lrjzv" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 10:31:05 addons-545880 kubelet[1285]: I1206 10:31:05.210805    1285 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-6sbmv" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 10:31:11 addons-545880 kubelet[1285]: E1206 10:31:11.375647    1285 cadvisor_stats_provider.go:567] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/crio/crio-18858f12bf4e91b070b46717f489357faa665d45fe3471e74d412a52a2bb6525\": RecentStats: unable to find data in memory cache]"
	Dec 06 10:31:15 addons-545880 kubelet[1285]: E1206 10:31:15.055755    1285 cadvisor_stats_provider.go:567] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/crio/crio-18858f12bf4e91b070b46717f489357faa665d45fe3471e74d412a52a2bb6525\": RecentStats: unable to find data in memory cache]"
	Dec 06 10:31:15 addons-545880 kubelet[1285]: I1206 10:31:15.211782    1285 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-nrw5g" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 10:31:15 addons-545880 kubelet[1285]: I1206 10:31:15.212012    1285 scope.go:117] "RemoveContainer" containerID="55905c08d6f459a7b90f62394a6702f0215c416af29568fd4489d03e762a7b68"
	Dec 06 10:31:15 addons-545880 kubelet[1285]: E1206 10:31:15.212206    1285 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 40s restarting failed container=registry-creds pod=registry-creds-764b6fb674-nrw5g_kube-system(db673d4c-19b0-4f94-bf04-a6cbd90211bf)\"" pod="kube-system/registry-creds-764b6fb674-nrw5g" podUID="db673d4c-19b0-4f94-bf04-a6cbd90211bf"
	Dec 06 10:31:21 addons-545880 kubelet[1285]: E1206 10:31:21.420111    1285 cadvisor_stats_provider.go:567] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/crio/crio-18858f12bf4e91b070b46717f489357faa665d45fe3471e74d412a52a2bb6525\": RecentStats: unable to find data in memory cache]"
	Dec 06 10:31:28 addons-545880 kubelet[1285]: I1206 10:31:28.214190    1285 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-nrw5g" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 10:31:28 addons-545880 kubelet[1285]: I1206 10:31:28.214270    1285 scope.go:117] "RemoveContainer" containerID="55905c08d6f459a7b90f62394a6702f0215c416af29568fd4489d03e762a7b68"
	Dec 06 10:31:28 addons-545880 kubelet[1285]: I1206 10:31:28.759326    1285 scope.go:117] "RemoveContainer" containerID="55905c08d6f459a7b90f62394a6702f0215c416af29568fd4489d03e762a7b68"
	Dec 06 10:31:28 addons-545880 kubelet[1285]: I1206 10:31:28.759647    1285 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-nrw5g" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 10:31:28 addons-545880 kubelet[1285]: I1206 10:31:28.759710    1285 scope.go:117] "RemoveContainer" containerID="febc6c074890644d490ffd46898ae58bfc0dc97a00710bfcfe5d2324aab25c05"
	Dec 06 10:31:28 addons-545880 kubelet[1285]: E1206 10:31:28.759875    1285 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=registry-creds pod=registry-creds-764b6fb674-nrw5g_kube-system(db673d4c-19b0-4f94-bf04-a6cbd90211bf)\"" pod="kube-system/registry-creds-764b6fb674-nrw5g" podUID="db673d4c-19b0-4f94-bf04-a6cbd90211bf"
	Dec 06 10:31:31 addons-545880 kubelet[1285]: E1206 10:31:31.463626    1285 cadvisor_stats_provider.go:567] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/crio/crio-18858f12bf4e91b070b46717f489357faa665d45fe3471e74d412a52a2bb6525\": RecentStats: unable to find data in memory cache]"
	Dec 06 10:31:40 addons-545880 kubelet[1285]: I1206 10:31:40.210763    1285 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-j4zp9" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 10:31:41 addons-545880 kubelet[1285]: I1206 10:31:41.221065    1285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b1f8e8f2-c073-4103-92d2-b1cf0fd1685e-gcp-creds\") pod \"hello-world-app-5d498dc89-pf8cn\" (UID: \"b1f8e8f2-c073-4103-92d2-b1cf0fd1685e\") " pod="default/hello-world-app-5d498dc89-pf8cn"
	Dec 06 10:31:41 addons-545880 kubelet[1285]: I1206 10:31:41.221609    1285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8j8x\" (UniqueName: \"kubernetes.io/projected/b1f8e8f2-c073-4103-92d2-b1cf0fd1685e-kube-api-access-p8j8x\") pod \"hello-world-app-5d498dc89-pf8cn\" (UID: \"b1f8e8f2-c073-4103-92d2-b1cf0fd1685e\") " pod="default/hello-world-app-5d498dc89-pf8cn"
	Dec 06 10:31:41 addons-545880 kubelet[1285]: E1206 10:31:41.509870    1285 cadvisor_stats_provider.go:567] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/crio/crio-18858f12bf4e91b070b46717f489357faa665d45fe3471e74d412a52a2bb6525\": RecentStats: unable to find data in memory cache]"
	Dec 06 10:31:41 addons-545880 kubelet[1285]: W1206 10:31:41.742089    1285 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/a93a155d5df0f69f3f6899c99e43c4171f82074157d733e88fc9accf1c14279f/crio-c391489746372eaab00b4067ee3d37c27d5e1221d896122463f0c719da430dee WatchSource:0}: Error finding container c391489746372eaab00b4067ee3d37c27d5e1221d896122463f0c719da430dee: Status 404 returned error can't find the container with id c391489746372eaab00b4067ee3d37c27d5e1221d896122463f0c719da430dee
	Dec 06 10:31:42 addons-545880 kubelet[1285]: I1206 10:31:42.211162    1285 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-nrw5g" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 10:31:42 addons-545880 kubelet[1285]: I1206 10:31:42.211242    1285 scope.go:117] "RemoveContainer" containerID="febc6c074890644d490ffd46898ae58bfc0dc97a00710bfcfe5d2324aab25c05"
	Dec 06 10:31:42 addons-545880 kubelet[1285]: E1206 10:31:42.211746    1285 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=registry-creds pod=registry-creds-764b6fb674-nrw5g_kube-system(db673d4c-19b0-4f94-bf04-a6cbd90211bf)\"" pod="kube-system/registry-creds-764b6fb674-nrw5g" podUID="db673d4c-19b0-4f94-bf04-a6cbd90211bf"
	Dec 06 10:31:42 addons-545880 kubelet[1285]: I1206 10:31:42.847628    1285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-pf8cn" podStartSLOduration=1.128768948 podStartE2EDuration="1.847599276s" podCreationTimestamp="2025-12-06 10:31:41 +0000 UTC" firstStartedPulling="2025-12-06 10:31:41.747100413 +0000 UTC m=+291.756908552" lastFinishedPulling="2025-12-06 10:31:42.465930741 +0000 UTC m=+292.475738880" observedRunningTime="2025-12-06 10:31:42.838391286 +0000 UTC m=+292.848199425" watchObservedRunningTime="2025-12-06 10:31:42.847599276 +0000 UTC m=+292.857407415"
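	
	Note: registry-creds is in CrashLoopBackOff with the back-off escalating from 40s to 1m20s. The exit reason is usually in the previous container instance's log (illustrative, using the pod name from the entries above):
	
	  kubectl --context addons-545880 -n kube-system logs pod/registry-creds-764b6fb674-nrw5g --previous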
	
	
	==> storage-provisioner [1c178782b46ae3df28453a2dd88fc57e38eb824abae86db11976cc74cf8b87be] <==
	W1206 10:31:19.105997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:31:21.108684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:31:21.113050       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:31:23.116560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:31:23.121244       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:31:25.124413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:31:25.129255       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:31:27.132497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:31:27.137015       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:31:29.141711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:31:29.146254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:31:31.150290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:31:31.157359       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:31:33.160794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:31:33.165296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:31:35.168621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:31:35.173167       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:31:37.176876       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:31:37.183741       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:31:39.187087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:31:39.192234       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:31:41.215604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:31:41.266178       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:31:43.272679       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:31:43.281949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
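	
	Note: these warnings repeat because the storage-provisioner still reads and writes v1 Endpoints objects (most likely for its leader-election lock), and the API server attaches a deprecation warning to every such call; they are noise rather than a failure. The EndpointSlice replacement can be inspected with (illustrative):
	
	  kubectl --context addons-545880 get endpointslices -n kube-system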
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-545880 -n addons-545880
helpers_test.go:269: (dbg) Run:  kubectl --context addons-545880 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-f28bm ingress-nginx-admission-patch-pb2fq
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-545880 describe pod ingress-nginx-admission-create-f28bm ingress-nginx-admission-patch-pb2fq
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-545880 describe pod ingress-nginx-admission-create-f28bm ingress-nginx-admission-patch-pb2fq: exit status 1 (132.653536ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-f28bm" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-pb2fq" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-545880 describe pod ingress-nginx-admission-create-f28bm ingress-nginx-admission-patch-pb2fq: exit status 1
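
Note: the two pods flagged as non-running are the short-lived ingress-nginx admission webhook Job pods, which normally run to completion and are then cleaned up. The NotFound errors above are likely expected noise on top of that: the post-mortem describe is issued without a namespace, so kubectl looks in default rather than in the addon's ingress-nginx namespace. A quick manual check, assuming the addon's default ingress-nginx namespace:

	# Inspect the admission Jobs and their pods in the namespace they actually live in.
	kubectl --context addons-545880 -n ingress-nginx get jobs,pods
	kubectl --context addons-545880 -n ingress-nginx describe pod ingress-nginx-admission-create-f28bm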
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-545880 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-545880 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (290.651694ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1206 10:31:44.544970  375425 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:31:44.545736  375425 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:31:44.545780  375425 out.go:374] Setting ErrFile to fd 2...
	I1206 10:31:44.545804  375425 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:31:44.546232  375425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 10:31:44.546744  375425 mustload.go:66] Loading cluster: addons-545880
	I1206 10:31:44.547496  375425 config.go:182] Loaded profile config "addons-545880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:31:44.547549  375425 addons.go:622] checking whether the cluster is paused
	I1206 10:31:44.547734  375425 config.go:182] Loaded profile config "addons-545880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:31:44.547777  375425 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:31:44.549247  375425 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:31:44.590829  375425 ssh_runner.go:195] Run: systemctl --version
	I1206 10:31:44.590915  375425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:31:44.610013  375425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:31:44.722418  375425 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:31:44.722503  375425 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:31:44.752559  375425 cri.go:89] found id: "febc6c074890644d490ffd46898ae58bfc0dc97a00710bfcfe5d2324aab25c05"
	I1206 10:31:44.752586  375425 cri.go:89] found id: "76e109752916eb227cc2778fc40189f2225fe99abbb5caa1dc492604fa63b088"
	I1206 10:31:44.752599  375425 cri.go:89] found id: "c3f6082a7a0c7c8725c19d46cd708aeb5d4126a349db5fe93809b3ef79169052"
	I1206 10:31:44.752603  375425 cri.go:89] found id: "d59b55d8c46bd316322b59f76bbed7bf1ba7ae09f22a8d7446896bb650747b97"
	I1206 10:31:44.752609  375425 cri.go:89] found id: "ee27785571f1406c526ba554d46111adfc871bf6b5094f993b79d922ed4e4e88"
	I1206 10:31:44.752612  375425 cri.go:89] found id: "b58456cd2cfa54ef5616f519f55a6b7b272d08f96ca019bf4d2f47f9dc581de3"
	I1206 10:31:44.752615  375425 cri.go:89] found id: "77b77d1ecb28a6271e776faf9148345a91cf28a8eb40f9adc7343e6d90864f3a"
	I1206 10:31:44.752621  375425 cri.go:89] found id: "0bb771e3965c7313e7a976270ee1cf4f72f901f19cf787e7ef330577f83ca8b0"
	I1206 10:31:44.752624  375425 cri.go:89] found id: "82475061c71650dc2d5ef1c1b6fb59dc1e8d85ff79c3598c514ad231134b1d1a"
	I1206 10:31:44.752632  375425 cri.go:89] found id: "52d954765a231dbdcd394aa043b7231f3b45f20db74ede3718de67caabeea5a3"
	I1206 10:31:44.752636  375425 cri.go:89] found id: "e292596ad2f80045ad3b706145d35d90657c46cc5300b047c28f357a09003684"
	I1206 10:31:44.752639  375425 cri.go:89] found id: "ab9b79c2c68c1be8095a1a81cd7d444d52723042c6629740074d930656007cfd"
	I1206 10:31:44.752643  375425 cri.go:89] found id: "eaaabe40faa63af2c6b5e0ffb01fdbff88ff53227bb4a4b884fca2db86a16b38"
	I1206 10:31:44.752646  375425 cri.go:89] found id: "aea66f37874913be4b5420f3d08acfb0b6388ccfb25c63270ce6741cf675ba44"
	I1206 10:31:44.752649  375425 cri.go:89] found id: "bbd2d73693ff14927141ea51103bb4d99dce673d1531632ca460362ab91bc129"
	I1206 10:31:44.752654  375425 cri.go:89] found id: "778b08b9b628cb82a3c8742868fe4b9a4b0dbad3c250600336afae611d54dcfd"
	I1206 10:31:44.752660  375425 cri.go:89] found id: "358be0ebbc23e420f6fde28e811fd30f1d4064a0e72dfb910c4e719a8d628d3b"
	I1206 10:31:44.752665  375425 cri.go:89] found id: "1c178782b46ae3df28453a2dd88fc57e38eb824abae86db11976cc74cf8b87be"
	I1206 10:31:44.752673  375425 cri.go:89] found id: "4f2a86e87c1bf385e11b164e78ea4f4e9844b0534c9bec2d841dfb406fec8a56"
	I1206 10:31:44.752676  375425 cri.go:89] found id: "410f38934f188529387872c7a0345e42f47f3295f320a1765aa24e1b9a271d4d"
	I1206 10:31:44.752683  375425 cri.go:89] found id: "69ffc3958d44bd262b1360fdb7c52481a97a7e588cd4d05224b3704341139dd0"
	I1206 10:31:44.752690  375425 cri.go:89] found id: "66618904a8d73226678429bf63c1faac7f76d45b9de953c282d294fedfc2cfb6"
	I1206 10:31:44.752693  375425 cri.go:89] found id: "9717574a8255200f8dddcf7a2550e63bdb6b4bb664ec25aeb8635f9277183f01"
	I1206 10:31:44.752696  375425 cri.go:89] found id: "6bfecd83e062db176f5124191f88157b58c2a91ba34d40d6c82c9fbd3c6fee47"
	I1206 10:31:44.752699  375425 cri.go:89] found id: ""
	I1206 10:31:44.752751  375425 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 10:31:44.768247  375425 out.go:203] 
	W1206 10:31:44.771286  375425 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:31:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 10:31:44.771311  375425 out.go:285] * 
	W1206 10:31:44.776413  375425 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:31:44.779343  375425 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-545880 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-545880 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-545880 addons disable ingress --alsologtostderr -v=1: exit status 11 (372.649584ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1206 10:31:44.839009  375469 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:31:44.839753  375469 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:31:44.839773  375469 out.go:374] Setting ErrFile to fd 2...
	I1206 10:31:44.839779  375469 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:31:44.840061  375469 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 10:31:44.840352  375469 mustload.go:66] Loading cluster: addons-545880
	I1206 10:31:44.840728  375469 config.go:182] Loaded profile config "addons-545880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:31:44.840747  375469 addons.go:622] checking whether the cluster is paused
	I1206 10:31:44.840854  375469 config.go:182] Loaded profile config "addons-545880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:31:44.840868  375469 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:31:44.841381  375469 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:31:44.867754  375469 ssh_runner.go:195] Run: systemctl --version
	I1206 10:31:44.867814  375469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:31:44.886200  375469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:31:45.038308  375469 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:31:45.042104  375469 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:31:45.094117  375469 cri.go:89] found id: "febc6c074890644d490ffd46898ae58bfc0dc97a00710bfcfe5d2324aab25c05"
	I1206 10:31:45.094165  375469 cri.go:89] found id: "76e109752916eb227cc2778fc40189f2225fe99abbb5caa1dc492604fa63b088"
	I1206 10:31:45.094180  375469 cri.go:89] found id: "c3f6082a7a0c7c8725c19d46cd708aeb5d4126a349db5fe93809b3ef79169052"
	I1206 10:31:45.094184  375469 cri.go:89] found id: "d59b55d8c46bd316322b59f76bbed7bf1ba7ae09f22a8d7446896bb650747b97"
	I1206 10:31:45.094188  375469 cri.go:89] found id: "ee27785571f1406c526ba554d46111adfc871bf6b5094f993b79d922ed4e4e88"
	I1206 10:31:45.094192  375469 cri.go:89] found id: "b58456cd2cfa54ef5616f519f55a6b7b272d08f96ca019bf4d2f47f9dc581de3"
	I1206 10:31:45.094196  375469 cri.go:89] found id: "77b77d1ecb28a6271e776faf9148345a91cf28a8eb40f9adc7343e6d90864f3a"
	I1206 10:31:45.094199  375469 cri.go:89] found id: "0bb771e3965c7313e7a976270ee1cf4f72f901f19cf787e7ef330577f83ca8b0"
	I1206 10:31:45.094203  375469 cri.go:89] found id: "82475061c71650dc2d5ef1c1b6fb59dc1e8d85ff79c3598c514ad231134b1d1a"
	I1206 10:31:45.094209  375469 cri.go:89] found id: "52d954765a231dbdcd394aa043b7231f3b45f20db74ede3718de67caabeea5a3"
	I1206 10:31:45.094221  375469 cri.go:89] found id: "e292596ad2f80045ad3b706145d35d90657c46cc5300b047c28f357a09003684"
	I1206 10:31:45.094224  375469 cri.go:89] found id: "ab9b79c2c68c1be8095a1a81cd7d444d52723042c6629740074d930656007cfd"
	I1206 10:31:45.094228  375469 cri.go:89] found id: "eaaabe40faa63af2c6b5e0ffb01fdbff88ff53227bb4a4b884fca2db86a16b38"
	I1206 10:31:45.094231  375469 cri.go:89] found id: "aea66f37874913be4b5420f3d08acfb0b6388ccfb25c63270ce6741cf675ba44"
	I1206 10:31:45.094234  375469 cri.go:89] found id: "bbd2d73693ff14927141ea51103bb4d99dce673d1531632ca460362ab91bc129"
	I1206 10:31:45.094254  375469 cri.go:89] found id: "778b08b9b628cb82a3c8742868fe4b9a4b0dbad3c250600336afae611d54dcfd"
	I1206 10:31:45.094258  375469 cri.go:89] found id: "358be0ebbc23e420f6fde28e811fd30f1d4064a0e72dfb910c4e719a8d628d3b"
	I1206 10:31:45.094263  375469 cri.go:89] found id: "1c178782b46ae3df28453a2dd88fc57e38eb824abae86db11976cc74cf8b87be"
	I1206 10:31:45.094266  375469 cri.go:89] found id: "4f2a86e87c1bf385e11b164e78ea4f4e9844b0534c9bec2d841dfb406fec8a56"
	I1206 10:31:45.094269  375469 cri.go:89] found id: "410f38934f188529387872c7a0345e42f47f3295f320a1765aa24e1b9a271d4d"
	I1206 10:31:45.094274  375469 cri.go:89] found id: "69ffc3958d44bd262b1360fdb7c52481a97a7e588cd4d05224b3704341139dd0"
	I1206 10:31:45.094278  375469 cri.go:89] found id: "66618904a8d73226678429bf63c1faac7f76d45b9de953c282d294fedfc2cfb6"
	I1206 10:31:45.094281  375469 cri.go:89] found id: "9717574a8255200f8dddcf7a2550e63bdb6b4bb664ec25aeb8635f9277183f01"
	I1206 10:31:45.094284  375469 cri.go:89] found id: "6bfecd83e062db176f5124191f88157b58c2a91ba34d40d6c82c9fbd3c6fee47"
	I1206 10:31:45.094288  375469 cri.go:89] found id: ""
	I1206 10:31:45.094362  375469 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 10:31:45.121672  375469 out.go:203] 
	W1206 10:31:45.127606  375469 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:31:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 10:31:45.127654  375469 out.go:285] * 
	W1206 10:31:45.143668  375469 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:31:45.148854  375469 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-545880 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (143.66s)
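
Note: every "addons disable" invocation in this report fails in the same pre-flight, not in the addon itself. Before disabling, minikube checks whether the cluster is paused (addons.go:622): it lists kube-system containers via crictl and then runs "sudo runc list -f json" (cri.go). On this crio node /run/runc does not exist, presumably because the configured OCI runtime keeps its state elsewhere, so the check exits with status 1 and the command aborts with MK_ADDON_DISABLE_PAUSED. A minimal manual reproduction, assuming SSH access to the profile's node (which state directories to look for is an assumption; check whatever runtime the node is actually configured with):

	# Reproduce the failing paused check on the node.
	minikube -p addons-545880 ssh -- sudo runc list -f json
	# Expected here: 'open /run/runc: no such file or directory', exit status 1.
	# See which runtime state directories exist (runc vs. crun is an assumption to verify):
	minikube -p addons-545880 ssh -- sudo ls -d /run/runc /run/crun
	# The container listing itself succeeds; only the runc state lookup fails:
	minikube -p addons-545880 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system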

TestAddons/parallel/InspektorGadget (6.3s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-hv92d" [5cfdb0d7-9dad-4a1f-b136-ffe6d05aeca7] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003775802s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-545880 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-545880 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (290.860971ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1206 10:29:21.260711  372994 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:29:21.261555  372994 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:29:21.261608  372994 out.go:374] Setting ErrFile to fd 2...
	I1206 10:29:21.261630  372994 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:29:21.261924  372994 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 10:29:21.262262  372994 mustload.go:66] Loading cluster: addons-545880
	I1206 10:29:21.262806  372994 config.go:182] Loaded profile config "addons-545880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:29:21.262856  372994 addons.go:622] checking whether the cluster is paused
	I1206 10:29:21.263036  372994 config.go:182] Loaded profile config "addons-545880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:29:21.263075  372994 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:29:21.263883  372994 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:29:21.282172  372994 ssh_runner.go:195] Run: systemctl --version
	I1206 10:29:21.282224  372994 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:29:21.300818  372994 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:29:21.412183  372994 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:29:21.412299  372994 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:29:21.450798  372994 cri.go:89] found id: "76e109752916eb227cc2778fc40189f2225fe99abbb5caa1dc492604fa63b088"
	I1206 10:29:21.450819  372994 cri.go:89] found id: "c3f6082a7a0c7c8725c19d46cd708aeb5d4126a349db5fe93809b3ef79169052"
	I1206 10:29:21.450825  372994 cri.go:89] found id: "d59b55d8c46bd316322b59f76bbed7bf1ba7ae09f22a8d7446896bb650747b97"
	I1206 10:29:21.450831  372994 cri.go:89] found id: "ee27785571f1406c526ba554d46111adfc871bf6b5094f993b79d922ed4e4e88"
	I1206 10:29:21.450835  372994 cri.go:89] found id: "b58456cd2cfa54ef5616f519f55a6b7b272d08f96ca019bf4d2f47f9dc581de3"
	I1206 10:29:21.450852  372994 cri.go:89] found id: "77b77d1ecb28a6271e776faf9148345a91cf28a8eb40f9adc7343e6d90864f3a"
	I1206 10:29:21.450864  372994 cri.go:89] found id: "0bb771e3965c7313e7a976270ee1cf4f72f901f19cf787e7ef330577f83ca8b0"
	I1206 10:29:21.450894  372994 cri.go:89] found id: "82475061c71650dc2d5ef1c1b6fb59dc1e8d85ff79c3598c514ad231134b1d1a"
	I1206 10:29:21.450897  372994 cri.go:89] found id: "52d954765a231dbdcd394aa043b7231f3b45f20db74ede3718de67caabeea5a3"
	I1206 10:29:21.450907  372994 cri.go:89] found id: "e292596ad2f80045ad3b706145d35d90657c46cc5300b047c28f357a09003684"
	I1206 10:29:21.450911  372994 cri.go:89] found id: "ab9b79c2c68c1be8095a1a81cd7d444d52723042c6629740074d930656007cfd"
	I1206 10:29:21.450914  372994 cri.go:89] found id: "eaaabe40faa63af2c6b5e0ffb01fdbff88ff53227bb4a4b884fca2db86a16b38"
	I1206 10:29:21.450917  372994 cri.go:89] found id: "aea66f37874913be4b5420f3d08acfb0b6388ccfb25c63270ce6741cf675ba44"
	I1206 10:29:21.450920  372994 cri.go:89] found id: "bbd2d73693ff14927141ea51103bb4d99dce673d1531632ca460362ab91bc129"
	I1206 10:29:21.450923  372994 cri.go:89] found id: "778b08b9b628cb82a3c8742868fe4b9a4b0dbad3c250600336afae611d54dcfd"
	I1206 10:29:21.450932  372994 cri.go:89] found id: "358be0ebbc23e420f6fde28e811fd30f1d4064a0e72dfb910c4e719a8d628d3b"
	I1206 10:29:21.450935  372994 cri.go:89] found id: "1c178782b46ae3df28453a2dd88fc57e38eb824abae86db11976cc74cf8b87be"
	I1206 10:29:21.450939  372994 cri.go:89] found id: "4f2a86e87c1bf385e11b164e78ea4f4e9844b0534c9bec2d841dfb406fec8a56"
	I1206 10:29:21.450943  372994 cri.go:89] found id: "410f38934f188529387872c7a0345e42f47f3295f320a1765aa24e1b9a271d4d"
	I1206 10:29:21.450952  372994 cri.go:89] found id: "69ffc3958d44bd262b1360fdb7c52481a97a7e588cd4d05224b3704341139dd0"
	I1206 10:29:21.450973  372994 cri.go:89] found id: "66618904a8d73226678429bf63c1faac7f76d45b9de953c282d294fedfc2cfb6"
	I1206 10:29:21.450986  372994 cri.go:89] found id: "9717574a8255200f8dddcf7a2550e63bdb6b4bb664ec25aeb8635f9277183f01"
	I1206 10:29:21.450988  372994 cri.go:89] found id: "6bfecd83e062db176f5124191f88157b58c2a91ba34d40d6c82c9fbd3c6fee47"
	I1206 10:29:21.450991  372994 cri.go:89] found id: ""
	I1206 10:29:21.451086  372994 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 10:29:21.476958  372994 out.go:203] 
	W1206 10:29:21.479904  372994 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:29:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 10:29:21.479930  372994 out.go:285] * 
	W1206 10:29:21.485278  372994 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:29:21.488300  372994 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-545880 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.30s)

TestAddons/parallel/MetricsServer (6.52s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 4.625074ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-6j9l7" [c7953423-aab8-4805-bc6f-57aac150e43a] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005354962s
addons_test.go:463: (dbg) Run:  kubectl --context addons-545880 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-545880 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-545880 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (385.611269ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1206 10:29:14.895079  372875 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:29:14.895964  372875 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:29:14.895986  372875 out.go:374] Setting ErrFile to fd 2...
	I1206 10:29:14.895994  372875 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:29:14.896267  372875 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 10:29:14.896557  372875 mustload.go:66] Loading cluster: addons-545880
	I1206 10:29:14.896993  372875 config.go:182] Loaded profile config "addons-545880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:29:14.897013  372875 addons.go:622] checking whether the cluster is paused
	I1206 10:29:14.897122  372875 config.go:182] Loaded profile config "addons-545880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:29:14.897142  372875 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:29:14.897646  372875 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:29:14.923045  372875 ssh_runner.go:195] Run: systemctl --version
	I1206 10:29:14.923118  372875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:29:14.942836  372875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:29:15.091122  372875 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:29:15.091217  372875 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:29:15.151501  372875 cri.go:89] found id: "76e109752916eb227cc2778fc40189f2225fe99abbb5caa1dc492604fa63b088"
	I1206 10:29:15.151566  372875 cri.go:89] found id: "c3f6082a7a0c7c8725c19d46cd708aeb5d4126a349db5fe93809b3ef79169052"
	I1206 10:29:15.151585  372875 cri.go:89] found id: "d59b55d8c46bd316322b59f76bbed7bf1ba7ae09f22a8d7446896bb650747b97"
	I1206 10:29:15.151605  372875 cri.go:89] found id: "ee27785571f1406c526ba554d46111adfc871bf6b5094f993b79d922ed4e4e88"
	I1206 10:29:15.151626  372875 cri.go:89] found id: "b58456cd2cfa54ef5616f519f55a6b7b272d08f96ca019bf4d2f47f9dc581de3"
	I1206 10:29:15.151662  372875 cri.go:89] found id: "77b77d1ecb28a6271e776faf9148345a91cf28a8eb40f9adc7343e6d90864f3a"
	I1206 10:29:15.151691  372875 cri.go:89] found id: "0bb771e3965c7313e7a976270ee1cf4f72f901f19cf787e7ef330577f83ca8b0"
	I1206 10:29:15.151709  372875 cri.go:89] found id: "82475061c71650dc2d5ef1c1b6fb59dc1e8d85ff79c3598c514ad231134b1d1a"
	I1206 10:29:15.151738  372875 cri.go:89] found id: "52d954765a231dbdcd394aa043b7231f3b45f20db74ede3718de67caabeea5a3"
	I1206 10:29:15.151762  372875 cri.go:89] found id: "e292596ad2f80045ad3b706145d35d90657c46cc5300b047c28f357a09003684"
	I1206 10:29:15.151778  372875 cri.go:89] found id: "ab9b79c2c68c1be8095a1a81cd7d444d52723042c6629740074d930656007cfd"
	I1206 10:29:15.151796  372875 cri.go:89] found id: "eaaabe40faa63af2c6b5e0ffb01fdbff88ff53227bb4a4b884fca2db86a16b38"
	I1206 10:29:15.151815  372875 cri.go:89] found id: "aea66f37874913be4b5420f3d08acfb0b6388ccfb25c63270ce6741cf675ba44"
	I1206 10:29:15.151844  372875 cri.go:89] found id: "bbd2d73693ff14927141ea51103bb4d99dce673d1531632ca460362ab91bc129"
	I1206 10:29:15.151867  372875 cri.go:89] found id: "778b08b9b628cb82a3c8742868fe4b9a4b0dbad3c250600336afae611d54dcfd"
	I1206 10:29:15.151888  372875 cri.go:89] found id: "358be0ebbc23e420f6fde28e811fd30f1d4064a0e72dfb910c4e719a8d628d3b"
	I1206 10:29:15.151917  372875 cri.go:89] found id: "1c178782b46ae3df28453a2dd88fc57e38eb824abae86db11976cc74cf8b87be"
	I1206 10:29:15.151948  372875 cri.go:89] found id: "4f2a86e87c1bf385e11b164e78ea4f4e9844b0534c9bec2d841dfb406fec8a56"
	I1206 10:29:15.151971  372875 cri.go:89] found id: "410f38934f188529387872c7a0345e42f47f3295f320a1765aa24e1b9a271d4d"
	I1206 10:29:15.151989  372875 cri.go:89] found id: "69ffc3958d44bd262b1360fdb7c52481a97a7e588cd4d05224b3704341139dd0"
	I1206 10:29:15.152013  372875 cri.go:89] found id: "66618904a8d73226678429bf63c1faac7f76d45b9de953c282d294fedfc2cfb6"
	I1206 10:29:15.152030  372875 cri.go:89] found id: "9717574a8255200f8dddcf7a2550e63bdb6b4bb664ec25aeb8635f9277183f01"
	I1206 10:29:15.152057  372875 cri.go:89] found id: "6bfecd83e062db176f5124191f88157b58c2a91ba34d40d6c82c9fbd3c6fee47"
	I1206 10:29:15.152082  372875 cri.go:89] found id: ""
	I1206 10:29:15.152163  372875 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 10:29:15.180775  372875 out.go:203] 
	W1206 10:29:15.185393  372875 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:29:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 10:29:15.185487  372875 out.go:285] * 
	W1206 10:29:15.192934  372875 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:29:15.196447  372875 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-545880 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (6.52s)

TestAddons/parallel/CSI (41.56s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1206 10:28:55.976217  364855 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1206 10:28:55.980607  364855 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1206 10:28:55.980637  364855 kapi.go:107] duration metric: took 6.495129ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 6.506666ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-545880 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-545880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-545880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-545880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-545880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-545880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-545880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-545880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-545880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-545880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-545880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-545880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-545880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-545880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-545880 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-545880 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-545880 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [edb147e3-5040-4f0a-9be7-cbbf0a7ae651] Pending
helpers_test.go:352: "task-pv-pod" [edb147e3-5040-4f0a-9be7-cbbf0a7ae651] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [edb147e3-5040-4f0a-9be7-cbbf0a7ae651] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.004656762s
addons_test.go:572: (dbg) Run:  kubectl --context addons-545880 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-545880 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-545880 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-545880 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-545880 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-545880 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-545880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-545880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-545880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-545880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-545880 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-545880 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [0620ea7f-6b7a-4df8-92b9-d796bd924792] Pending
helpers_test.go:352: "task-pv-pod-restore" [0620ea7f-6b7a-4df8-92b9-d796bd924792] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [0620ea7f-6b7a-4df8-92b9-d796bd924792] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003424762s
addons_test.go:614: (dbg) Run:  kubectl --context addons-545880 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-545880 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-545880 delete volumesnapshot new-snapshot-demo
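
Note: the CSI flow itself passes end to end above: provision (hpvc reaches Bound), attach (task-pv-pod healthy within 11s), snapshot (new-snapshot-demo becomes readyToUse), and restore (hpvc-restore and task-pv-pod-restore, healthy within 8s). The FAIL recorded below comes solely from the paused-check error in the two final disable steps. Condensed to the commands from this log, the sequence the test drives is (manifest paths are relative to the minikube test tree):

	kubectl --context addons-545880 create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-545880 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	kubectl --context addons-545880 create -f testdata/csi-hostpath-driver/snapshot.yaml
	kubectl --context addons-545880 get volumesnapshot new-snapshot-demo -n default -o jsonpath={.status.readyToUse}
	kubectl --context addons-545880 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	kubectl --context addons-545880 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml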
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-545880 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-545880 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (276.763157ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1206 10:29:37.034715  373647 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:29:37.035472  373647 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:29:37.035489  373647 out.go:374] Setting ErrFile to fd 2...
	I1206 10:29:37.035497  373647 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:29:37.035820  373647 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 10:29:37.036942  373647 mustload.go:66] Loading cluster: addons-545880
	I1206 10:29:37.037398  373647 config.go:182] Loaded profile config "addons-545880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:29:37.037420  373647 addons.go:622] checking whether the cluster is paused
	I1206 10:29:37.037579  373647 config.go:182] Loaded profile config "addons-545880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:29:37.037599  373647 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:29:37.038157  373647 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:29:37.056565  373647 ssh_runner.go:195] Run: systemctl --version
	I1206 10:29:37.056625  373647 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:29:37.075465  373647 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:29:37.182003  373647 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:29:37.182088  373647 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:29:37.213528  373647 cri.go:89] found id: "76e109752916eb227cc2778fc40189f2225fe99abbb5caa1dc492604fa63b088"
	I1206 10:29:37.213550  373647 cri.go:89] found id: "c3f6082a7a0c7c8725c19d46cd708aeb5d4126a349db5fe93809b3ef79169052"
	I1206 10:29:37.213555  373647 cri.go:89] found id: "d59b55d8c46bd316322b59f76bbed7bf1ba7ae09f22a8d7446896bb650747b97"
	I1206 10:29:37.213559  373647 cri.go:89] found id: "ee27785571f1406c526ba554d46111adfc871bf6b5094f993b79d922ed4e4e88"
	I1206 10:29:37.213563  373647 cri.go:89] found id: "b58456cd2cfa54ef5616f519f55a6b7b272d08f96ca019bf4d2f47f9dc581de3"
	I1206 10:29:37.213567  373647 cri.go:89] found id: "77b77d1ecb28a6271e776faf9148345a91cf28a8eb40f9adc7343e6d90864f3a"
	I1206 10:29:37.213570  373647 cri.go:89] found id: "0bb771e3965c7313e7a976270ee1cf4f72f901f19cf787e7ef330577f83ca8b0"
	I1206 10:29:37.213574  373647 cri.go:89] found id: "82475061c71650dc2d5ef1c1b6fb59dc1e8d85ff79c3598c514ad231134b1d1a"
	I1206 10:29:37.213577  373647 cri.go:89] found id: "52d954765a231dbdcd394aa043b7231f3b45f20db74ede3718de67caabeea5a3"
	I1206 10:29:37.213586  373647 cri.go:89] found id: "e292596ad2f80045ad3b706145d35d90657c46cc5300b047c28f357a09003684"
	I1206 10:29:37.213590  373647 cri.go:89] found id: "ab9b79c2c68c1be8095a1a81cd7d444d52723042c6629740074d930656007cfd"
	I1206 10:29:37.213593  373647 cri.go:89] found id: "eaaabe40faa63af2c6b5e0ffb01fdbff88ff53227bb4a4b884fca2db86a16b38"
	I1206 10:29:37.213601  373647 cri.go:89] found id: "aea66f37874913be4b5420f3d08acfb0b6388ccfb25c63270ce6741cf675ba44"
	I1206 10:29:37.213604  373647 cri.go:89] found id: "bbd2d73693ff14927141ea51103bb4d99dce673d1531632ca460362ab91bc129"
	I1206 10:29:37.213607  373647 cri.go:89] found id: "778b08b9b628cb82a3c8742868fe4b9a4b0dbad3c250600336afae611d54dcfd"
	I1206 10:29:37.213612  373647 cri.go:89] found id: "358be0ebbc23e420f6fde28e811fd30f1d4064a0e72dfb910c4e719a8d628d3b"
	I1206 10:29:37.213618  373647 cri.go:89] found id: "1c178782b46ae3df28453a2dd88fc57e38eb824abae86db11976cc74cf8b87be"
	I1206 10:29:37.213623  373647 cri.go:89] found id: "4f2a86e87c1bf385e11b164e78ea4f4e9844b0534c9bec2d841dfb406fec8a56"
	I1206 10:29:37.213626  373647 cri.go:89] found id: "410f38934f188529387872c7a0345e42f47f3295f320a1765aa24e1b9a271d4d"
	I1206 10:29:37.213629  373647 cri.go:89] found id: "69ffc3958d44bd262b1360fdb7c52481a97a7e588cd4d05224b3704341139dd0"
	I1206 10:29:37.213634  373647 cri.go:89] found id: "66618904a8d73226678429bf63c1faac7f76d45b9de953c282d294fedfc2cfb6"
	I1206 10:29:37.213637  373647 cri.go:89] found id: "9717574a8255200f8dddcf7a2550e63bdb6b4bb664ec25aeb8635f9277183f01"
	I1206 10:29:37.213644  373647 cri.go:89] found id: "6bfecd83e062db176f5124191f88157b58c2a91ba34d40d6c82c9fbd3c6fee47"
	I1206 10:29:37.213651  373647 cri.go:89] found id: ""
	I1206 10:29:37.213705  373647 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 10:29:37.233957  373647 out.go:203] 
	W1206 10:29:37.236844  373647 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:29:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 10:29:37.236885  373647 out.go:285] * 
	W1206 10:29:37.241918  373647 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:29:37.245066  373647 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-545880 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-545880 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-545880 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (275.922609ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1206 10:29:37.308834  373689 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:29:37.309754  373689 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:29:37.309799  373689 out.go:374] Setting ErrFile to fd 2...
	I1206 10:29:37.309817  373689 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:29:37.310223  373689 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 10:29:37.310597  373689 mustload.go:66] Loading cluster: addons-545880
	I1206 10:29:37.311034  373689 config.go:182] Loaded profile config "addons-545880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:29:37.311073  373689 addons.go:622] checking whether the cluster is paused
	I1206 10:29:37.311243  373689 config.go:182] Loaded profile config "addons-545880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:29:37.311274  373689 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:29:37.311882  373689 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:29:37.330790  373689 ssh_runner.go:195] Run: systemctl --version
	I1206 10:29:37.330855  373689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:29:37.350109  373689 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:29:37.461947  373689 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:29:37.462082  373689 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:29:37.492853  373689 cri.go:89] found id: "76e109752916eb227cc2778fc40189f2225fe99abbb5caa1dc492604fa63b088"
	I1206 10:29:37.492873  373689 cri.go:89] found id: "c3f6082a7a0c7c8725c19d46cd708aeb5d4126a349db5fe93809b3ef79169052"
	I1206 10:29:37.492889  373689 cri.go:89] found id: "d59b55d8c46bd316322b59f76bbed7bf1ba7ae09f22a8d7446896bb650747b97"
	I1206 10:29:37.492893  373689 cri.go:89] found id: "ee27785571f1406c526ba554d46111adfc871bf6b5094f993b79d922ed4e4e88"
	I1206 10:29:37.492897  373689 cri.go:89] found id: "b58456cd2cfa54ef5616f519f55a6b7b272d08f96ca019bf4d2f47f9dc581de3"
	I1206 10:29:37.492901  373689 cri.go:89] found id: "77b77d1ecb28a6271e776faf9148345a91cf28a8eb40f9adc7343e6d90864f3a"
	I1206 10:29:37.492904  373689 cri.go:89] found id: "0bb771e3965c7313e7a976270ee1cf4f72f901f19cf787e7ef330577f83ca8b0"
	I1206 10:29:37.492907  373689 cri.go:89] found id: "82475061c71650dc2d5ef1c1b6fb59dc1e8d85ff79c3598c514ad231134b1d1a"
	I1206 10:29:37.492911  373689 cri.go:89] found id: "52d954765a231dbdcd394aa043b7231f3b45f20db74ede3718de67caabeea5a3"
	I1206 10:29:37.492917  373689 cri.go:89] found id: "e292596ad2f80045ad3b706145d35d90657c46cc5300b047c28f357a09003684"
	I1206 10:29:37.492920  373689 cri.go:89] found id: "ab9b79c2c68c1be8095a1a81cd7d444d52723042c6629740074d930656007cfd"
	I1206 10:29:37.492923  373689 cri.go:89] found id: "eaaabe40faa63af2c6b5e0ffb01fdbff88ff53227bb4a4b884fca2db86a16b38"
	I1206 10:29:37.492927  373689 cri.go:89] found id: "aea66f37874913be4b5420f3d08acfb0b6388ccfb25c63270ce6741cf675ba44"
	I1206 10:29:37.492929  373689 cri.go:89] found id: "bbd2d73693ff14927141ea51103bb4d99dce673d1531632ca460362ab91bc129"
	I1206 10:29:37.492933  373689 cri.go:89] found id: "778b08b9b628cb82a3c8742868fe4b9a4b0dbad3c250600336afae611d54dcfd"
	I1206 10:29:37.492938  373689 cri.go:89] found id: "358be0ebbc23e420f6fde28e811fd30f1d4064a0e72dfb910c4e719a8d628d3b"
	I1206 10:29:37.492941  373689 cri.go:89] found id: "1c178782b46ae3df28453a2dd88fc57e38eb824abae86db11976cc74cf8b87be"
	I1206 10:29:37.492944  373689 cri.go:89] found id: "4f2a86e87c1bf385e11b164e78ea4f4e9844b0534c9bec2d841dfb406fec8a56"
	I1206 10:29:37.492947  373689 cri.go:89] found id: "410f38934f188529387872c7a0345e42f47f3295f320a1765aa24e1b9a271d4d"
	I1206 10:29:37.492950  373689 cri.go:89] found id: "69ffc3958d44bd262b1360fdb7c52481a97a7e588cd4d05224b3704341139dd0"
	I1206 10:29:37.492956  373689 cri.go:89] found id: "66618904a8d73226678429bf63c1faac7f76d45b9de953c282d294fedfc2cfb6"
	I1206 10:29:37.492959  373689 cri.go:89] found id: "9717574a8255200f8dddcf7a2550e63bdb6b4bb664ec25aeb8635f9277183f01"
	I1206 10:29:37.492962  373689 cri.go:89] found id: "6bfecd83e062db176f5124191f88157b58c2a91ba34d40d6c82c9fbd3c6fee47"
	I1206 10:29:37.492964  373689 cri.go:89] found id: ""
	I1206 10:29:37.493028  373689 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 10:29:37.509433  373689 out.go:203] 
	W1206 10:29:37.512536  373689 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:29:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 10:29:37.512562  373689 out.go:285] * 
	W1206 10:29:37.517645  373689 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:29:37.520762  373689 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-545880 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (41.56s)
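
Note: the MK_ADDON_DISABLE_PAUSED exit above originates in minikube's paused-state probe, which runs `sudo runc list -f json` on the node and treats any non-zero exit as fatal. A minimal standalone sketch of that probe in Go (not part of the test suite; assumes it is run on the node, e.g. via `minikube ssh`, with runc installed):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// Runs the same probe that fails in the stderr above. On this crio node
	// the default runc state directory /run/runc does not exist, so runc
	// exits 1 with `open /run/runc: no such file or directory`, which
	// minikube wraps as "check paused: list paused: runc: ... exit status 1".
	func main() {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			fmt.Printf("list paused: runc: %v\n%s", err, out)
			return
		}
		fmt.Printf("%s", out)
	}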

TestAddons/parallel/Headlamp (3.3s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-545880 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-545880 --alsologtostderr -v=1: exit status 11 (321.291211ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1206 10:28:52.754805  371958 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:28:52.757931  371958 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:28:52.757999  371958 out.go:374] Setting ErrFile to fd 2...
	I1206 10:28:52.758038  371958 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:28:52.758651  371958 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 10:28:52.759129  371958 mustload.go:66] Loading cluster: addons-545880
	I1206 10:28:52.759690  371958 config.go:182] Loaded profile config "addons-545880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:28:52.759733  371958 addons.go:622] checking whether the cluster is paused
	I1206 10:28:52.759910  371958 config.go:182] Loaded profile config "addons-545880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:28:52.759940  371958 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:28:52.760610  371958 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:28:52.798787  371958 ssh_runner.go:195] Run: systemctl --version
	I1206 10:28:52.798844  371958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:28:52.819330  371958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:28:52.926922  371958 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:28:52.927025  371958 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:28:52.958255  371958 cri.go:89] found id: "76e109752916eb227cc2778fc40189f2225fe99abbb5caa1dc492604fa63b088"
	I1206 10:28:52.958281  371958 cri.go:89] found id: "c3f6082a7a0c7c8725c19d46cd708aeb5d4126a349db5fe93809b3ef79169052"
	I1206 10:28:52.958286  371958 cri.go:89] found id: "d59b55d8c46bd316322b59f76bbed7bf1ba7ae09f22a8d7446896bb650747b97"
	I1206 10:28:52.958295  371958 cri.go:89] found id: "ee27785571f1406c526ba554d46111adfc871bf6b5094f993b79d922ed4e4e88"
	I1206 10:28:52.958299  371958 cri.go:89] found id: "b58456cd2cfa54ef5616f519f55a6b7b272d08f96ca019bf4d2f47f9dc581de3"
	I1206 10:28:52.958303  371958 cri.go:89] found id: "77b77d1ecb28a6271e776faf9148345a91cf28a8eb40f9adc7343e6d90864f3a"
	I1206 10:28:52.958306  371958 cri.go:89] found id: "0bb771e3965c7313e7a976270ee1cf4f72f901f19cf787e7ef330577f83ca8b0"
	I1206 10:28:52.958309  371958 cri.go:89] found id: "82475061c71650dc2d5ef1c1b6fb59dc1e8d85ff79c3598c514ad231134b1d1a"
	I1206 10:28:52.958312  371958 cri.go:89] found id: "52d954765a231dbdcd394aa043b7231f3b45f20db74ede3718de67caabeea5a3"
	I1206 10:28:52.958318  371958 cri.go:89] found id: "e292596ad2f80045ad3b706145d35d90657c46cc5300b047c28f357a09003684"
	I1206 10:28:52.958325  371958 cri.go:89] found id: "ab9b79c2c68c1be8095a1a81cd7d444d52723042c6629740074d930656007cfd"
	I1206 10:28:52.958333  371958 cri.go:89] found id: "eaaabe40faa63af2c6b5e0ffb01fdbff88ff53227bb4a4b884fca2db86a16b38"
	I1206 10:28:52.958336  371958 cri.go:89] found id: "aea66f37874913be4b5420f3d08acfb0b6388ccfb25c63270ce6741cf675ba44"
	I1206 10:28:52.958339  371958 cri.go:89] found id: "bbd2d73693ff14927141ea51103bb4d99dce673d1531632ca460362ab91bc129"
	I1206 10:28:52.958349  371958 cri.go:89] found id: "778b08b9b628cb82a3c8742868fe4b9a4b0dbad3c250600336afae611d54dcfd"
	I1206 10:28:52.958355  371958 cri.go:89] found id: "358be0ebbc23e420f6fde28e811fd30f1d4064a0e72dfb910c4e719a8d628d3b"
	I1206 10:28:52.958364  371958 cri.go:89] found id: "1c178782b46ae3df28453a2dd88fc57e38eb824abae86db11976cc74cf8b87be"
	I1206 10:28:52.958368  371958 cri.go:89] found id: "4f2a86e87c1bf385e11b164e78ea4f4e9844b0534c9bec2d841dfb406fec8a56"
	I1206 10:28:52.958371  371958 cri.go:89] found id: "410f38934f188529387872c7a0345e42f47f3295f320a1765aa24e1b9a271d4d"
	I1206 10:28:52.958374  371958 cri.go:89] found id: "69ffc3958d44bd262b1360fdb7c52481a97a7e588cd4d05224b3704341139dd0"
	I1206 10:28:52.958379  371958 cri.go:89] found id: "66618904a8d73226678429bf63c1faac7f76d45b9de953c282d294fedfc2cfb6"
	I1206 10:28:52.958382  371958 cri.go:89] found id: "9717574a8255200f8dddcf7a2550e63bdb6b4bb664ec25aeb8635f9277183f01"
	I1206 10:28:52.958386  371958 cri.go:89] found id: "6bfecd83e062db176f5124191f88157b58c2a91ba34d40d6c82c9fbd3c6fee47"
	I1206 10:28:52.958395  371958 cri.go:89] found id: ""
	I1206 10:28:52.958452  371958 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 10:28:52.974380  371958 out.go:203] 
	W1206 10:28:52.977473  371958 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:28:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 10:28:52.977514  371958 out.go:285] * 
	W1206 10:28:52.982556  371958 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:28:52.985637  371958 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-545880 --alsologtostderr -v=1": exit status 11
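
Note: the Headlamp failure follows the same path as the CSI one: cri.go first collects the kube-system container IDs (the `found id:` lines above) via crictl, and only then does the runc probe abort the command. A sketch of that listing step alone, in Go (hypothetical helper, not from the suite; assumes crictl is on the node's PATH):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// Mirrors the cri.go:54 listing in the stderr above: gather the IDs of
	// all kube-system containers before the paused check runs.
	func main() {
		out, err := exec.Command("sudo", "-s", "eval",
			"crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			panic(err)
		}
		for _, id := range strings.Fields(string(out)) {
			fmt.Println("found id:", id)
		}
	}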
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-545880
helpers_test.go:243: (dbg) docker inspect addons-545880:

-- stdout --
	[
	    {
	        "Id": "a93a155d5df0f69f3f6899c99e43c4171f82074157d733e88fc9accf1c14279f",
	        "Created": "2025-12-06T10:26:24.660997861Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 366239,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T10:26:24.727779728Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/a93a155d5df0f69f3f6899c99e43c4171f82074157d733e88fc9accf1c14279f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a93a155d5df0f69f3f6899c99e43c4171f82074157d733e88fc9accf1c14279f/hostname",
	        "HostsPath": "/var/lib/docker/containers/a93a155d5df0f69f3f6899c99e43c4171f82074157d733e88fc9accf1c14279f/hosts",
	        "LogPath": "/var/lib/docker/containers/a93a155d5df0f69f3f6899c99e43c4171f82074157d733e88fc9accf1c14279f/a93a155d5df0f69f3f6899c99e43c4171f82074157d733e88fc9accf1c14279f-json.log",
	        "Name": "/addons-545880",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-545880:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-545880",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a93a155d5df0f69f3f6899c99e43c4171f82074157d733e88fc9accf1c14279f",
	                "LowerDir": "/var/lib/docker/overlay2/2a20fabcb20792bc91f77cc1658dd9acd97c3c9361377ddd08683b2d2c3427d3-init/diff:/var/lib/docker/overlay2/5011226d55616c9977b14c1fe617d1302fe59373df05ce8ec6e21b79143a1c57/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2a20fabcb20792bc91f77cc1658dd9acd97c3c9361377ddd08683b2d2c3427d3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2a20fabcb20792bc91f77cc1658dd9acd97c3c9361377ddd08683b2d2c3427d3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2a20fabcb20792bc91f77cc1658dd9acd97c3c9361377ddd08683b2d2c3427d3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-545880",
	                "Source": "/var/lib/docker/volumes/addons-545880/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-545880",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-545880",
	                "name.minikube.sigs.k8s.io": "addons-545880",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "435281ad4d0520d430fa5da9ba4f57070020402866438f00f22ba5bdcfb57a1d",
	            "SandboxKey": "/var/run/docker/netns/435281ad4d05",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-545880": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:76:fe:d0:d1:fe",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b5ca69fa4acc00d6f9d4dbd4b897fc14fad38da4fafffd1234ebd25cc9478e9c",
	                    "EndpointID": "4e5a0228af364d4e6cf94ca1f26538ee6e7d6addf7f6f4e10b40700a3f0cf517",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-545880",
	                        "a93a155d5df0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
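
Note: the ssh client at 127.0.0.1:33143 in the Headlamp stderr is dialed using the 22/tcp host-port mapping visible in this inspect output. A small Go sketch of that lookup, reusing the template from the cli_runner log line (assumes the docker CLI is installed and the addons-545880 container exists):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// Extracts the host port bound to the node's 22/tcp; per the inspect
	// dump above this resolves to 33143 for addons-545880.
	func main() {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "addons-545880").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
	}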
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-545880 -n addons-545880
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-545880 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-545880 logs -n 25: (1.48166267s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-807014 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-807014   │ jenkins │ v1.37.0 │ 06 Dec 25 10:25 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 06 Dec 25 10:25 UTC │ 06 Dec 25 10:25 UTC │
	│ delete  │ -p download-only-807014                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-807014   │ jenkins │ v1.37.0 │ 06 Dec 25 10:25 UTC │ 06 Dec 25 10:25 UTC │
	│ start   │ -o=json --download-only -p download-only-154072 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-154072   │ jenkins │ v1.37.0 │ 06 Dec 25 10:25 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 06 Dec 25 10:25 UTC │ 06 Dec 25 10:25 UTC │
	│ delete  │ -p download-only-154072                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-154072   │ jenkins │ v1.37.0 │ 06 Dec 25 10:25 UTC │ 06 Dec 25 10:25 UTC │
	│ start   │ -o=json --download-only -p download-only-171878 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                         │ download-only-171878   │ jenkins │ v1.37.0 │ 06 Dec 25 10:25 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 06 Dec 25 10:25 UTC │ 06 Dec 25 10:25 UTC │
	│ delete  │ -p download-only-171878                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-171878   │ jenkins │ v1.37.0 │ 06 Dec 25 10:25 UTC │ 06 Dec 25 10:25 UTC │
	│ delete  │ -p download-only-807014                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-807014   │ jenkins │ v1.37.0 │ 06 Dec 25 10:25 UTC │ 06 Dec 25 10:25 UTC │
	│ delete  │ -p download-only-154072                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-154072   │ jenkins │ v1.37.0 │ 06 Dec 25 10:25 UTC │ 06 Dec 25 10:25 UTC │
	│ delete  │ -p download-only-171878                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-171878   │ jenkins │ v1.37.0 │ 06 Dec 25 10:25 UTC │ 06 Dec 25 10:25 UTC │
	│ start   │ --download-only -p download-docker-566856 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-566856 │ jenkins │ v1.37.0 │ 06 Dec 25 10:25 UTC │                     │
	│ delete  │ -p download-docker-566856                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-566856 │ jenkins │ v1.37.0 │ 06 Dec 25 10:25 UTC │ 06 Dec 25 10:25 UTC │
	│ start   │ --download-only -p binary-mirror-657674 --alsologtostderr --binary-mirror http://127.0.0.1:46333 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-657674   │ jenkins │ v1.37.0 │ 06 Dec 25 10:25 UTC │                     │
	│ delete  │ -p binary-mirror-657674                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-657674   │ jenkins │ v1.37.0 │ 06 Dec 25 10:25 UTC │ 06 Dec 25 10:25 UTC │
	│ addons  │ enable dashboard -p addons-545880                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-545880          │ jenkins │ v1.37.0 │ 06 Dec 25 10:25 UTC │                     │
	│ addons  │ disable dashboard -p addons-545880                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-545880          │ jenkins │ v1.37.0 │ 06 Dec 25 10:25 UTC │                     │
	│ start   │ -p addons-545880 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-545880          │ jenkins │ v1.37.0 │ 06 Dec 25 10:25 UTC │ 06 Dec 25 10:28 UTC │
	│ addons  │ addons-545880 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-545880          │ jenkins │ v1.37.0 │ 06 Dec 25 10:28 UTC │                     │
	│ addons  │ addons-545880 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-545880          │ jenkins │ v1.37.0 │ 06 Dec 25 10:28 UTC │                     │
	│ addons  │ enable headlamp -p addons-545880 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-545880          │ jenkins │ v1.37.0 │ 06 Dec 25 10:28 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:25:59
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:25:59.831901  365843 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:25:59.832034  365843 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:25:59.832044  365843 out.go:374] Setting ErrFile to fd 2...
	I1206 10:25:59.832050  365843 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:25:59.832321  365843 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 10:25:59.832813  365843 out.go:368] Setting JSON to false
	I1206 10:25:59.833652  365843 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":7711,"bootTime":1765009049,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 10:25:59.833731  365843 start.go:143] virtualization:  
	I1206 10:25:59.837083  365843 out.go:179] * [addons-545880] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:25:59.840935  365843 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 10:25:59.841048  365843 notify.go:221] Checking for updates...
	I1206 10:25:59.846845  365843 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:25:59.849772  365843 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 10:25:59.852744  365843 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-362985/.minikube
	I1206 10:25:59.855594  365843 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:25:59.858496  365843 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:25:59.861507  365843 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:25:59.889734  365843 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:25:59.889878  365843 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:25:59.948079  365843 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-06 10:25:59.938847003 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:25:59.948197  365843 docker.go:319] overlay module found
	I1206 10:25:59.951279  365843 out.go:179] * Using the docker driver based on user configuration
	I1206 10:25:59.954241  365843 start.go:309] selected driver: docker
	I1206 10:25:59.954267  365843 start.go:927] validating driver "docker" against <nil>
	I1206 10:25:59.954287  365843 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:25:59.955056  365843 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:26:00.076464  365843 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-06 10:26:00.034895678 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:26:00.076679  365843 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 10:26:00.076929  365843 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 10:26:00.083865  365843 out.go:179] * Using Docker driver with root privileges
	I1206 10:26:00.086971  365843 cni.go:84] Creating CNI manager for ""
	I1206 10:26:00.087065  365843 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:26:00.087080  365843 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 10:26:00.087177  365843 start.go:353] cluster config:
	{Name:addons-545880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-545880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:26:00.092447  365843 out.go:179] * Starting "addons-545880" primary control-plane node in "addons-545880" cluster
	I1206 10:26:00.095350  365843 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 10:26:00.098397  365843 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:26:00.101333  365843 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:26:00.101380  365843 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 10:26:00.101568  365843 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1206 10:26:00.101586  365843 cache.go:65] Caching tarball of preloaded images
	I1206 10:26:00.101689  365843 preload.go:238] Found /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1206 10:26:00.101707  365843 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1206 10:26:00.102099  365843 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/config.json ...
	I1206 10:26:00.102136  365843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/config.json: {Name:mk7e12d5acc9c9b0ed556f29d1343f0847944d5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:26:00.161923  365843 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 to local cache
	I1206 10:26:00.162079  365843 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory
	I1206 10:26:00.162112  365843 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory, skipping pull
	I1206 10:26:00.162117  365843 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in cache, skipping pull
	I1206 10:26:00.162126  365843 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 as a tarball
	I1206 10:26:00.162131  365843 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 from local cache
	I1206 10:26:18.511593  365843 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 from cached tarball
	I1206 10:26:18.511632  365843 cache.go:243] Successfully downloaded all kic artifacts
	I1206 10:26:18.511687  365843 start.go:360] acquireMachinesLock for addons-545880: {Name:mkbddccc20b56c014d20069484dc6aca478c0df0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:26:18.511811  365843 start.go:364] duration metric: took 104.551µs to acquireMachinesLock for "addons-545880"
	I1206 10:26:18.511837  365843 start.go:93] Provisioning new machine with config: &{Name:addons-545880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-545880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 10:26:18.511907  365843 start.go:125] createHost starting for "" (driver="docker")
	I1206 10:26:18.515412  365843 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1206 10:26:18.515659  365843 start.go:159] libmachine.API.Create for "addons-545880" (driver="docker")
	I1206 10:26:18.515700  365843 client.go:173] LocalClient.Create starting
	I1206 10:26:18.515815  365843 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem
	I1206 10:26:18.697389  365843 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem
	I1206 10:26:18.943080  365843 cli_runner.go:164] Run: docker network inspect addons-545880 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 10:26:18.967774  365843 cli_runner.go:211] docker network inspect addons-545880 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 10:26:18.967859  365843 network_create.go:284] running [docker network inspect addons-545880] to gather additional debugging logs...
	I1206 10:26:18.967882  365843 cli_runner.go:164] Run: docker network inspect addons-545880
	W1206 10:26:18.983985  365843 cli_runner.go:211] docker network inspect addons-545880 returned with exit code 1
	I1206 10:26:18.984017  365843 network_create.go:287] error running [docker network inspect addons-545880]: docker network inspect addons-545880: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-545880 not found
	I1206 10:26:18.984038  365843 network_create.go:289] output of [docker network inspect addons-545880]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-545880 not found
	
	** /stderr **
	I1206 10:26:18.984134  365843 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:26:19.001199  365843 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001997bb0}
	I1206 10:26:19.001265  365843 network_create.go:124] attempt to create docker network addons-545880 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1206 10:26:19.001337  365843 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-545880 addons-545880
	I1206 10:26:19.068372  365843 network_create.go:108] docker network addons-545880 192.168.49.0/24 created
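For anyone replaying this step by hand: the network creation above is an ordinary `docker network create` call. A minimal Go sketch of the same invocation (standard library only; it mirrors the name, subnet, gateway, and MTU from the log, and omits the masquerade/icc options and minikube labels for brevity):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // A bridge network with a fixed subnet/gateway, so the node container
        // can later be given the static IP 192.168.49.2 calculated in the log.
        out, err := exec.Command("docker", "network", "create",
            "--driver=bridge",
            "--subnet=192.168.49.0/24",
            "--gateway=192.168.49.1",
            "-o", "com.docker.network.driver.mtu=1500",
            "addons-545880",
        ).CombinedOutput()
        if err != nil {
            fmt.Printf("network create failed: %v\n%s", err, out)
            return
        }
        fmt.Printf("created network: %s", out)
    }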
	I1206 10:26:19.068406  365843 kic.go:121] calculated static IP "192.168.49.2" for the "addons-545880" container
	I1206 10:26:19.068511  365843 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 10:26:19.084852  365843 cli_runner.go:164] Run: docker volume create addons-545880 --label name.minikube.sigs.k8s.io=addons-545880 --label created_by.minikube.sigs.k8s.io=true
	I1206 10:26:19.103544  365843 oci.go:103] Successfully created a docker volume addons-545880
	I1206 10:26:19.103648  365843 cli_runner.go:164] Run: docker run --rm --name addons-545880-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-545880 --entrypoint /usr/bin/test -v addons-545880:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 10:26:20.646987  365843 cli_runner.go:217] Completed: docker run --rm --name addons-545880-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-545880 --entrypoint /usr/bin/test -v addons-545880:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib: (1.543302795s)
	I1206 10:26:20.647022  365843 oci.go:107] Successfully prepared a docker volume addons-545880
	I1206 10:26:20.647066  365843 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 10:26:20.647092  365843 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 10:26:20.647161  365843 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-545880:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 10:26:24.583440  365843 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-545880:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (3.936235065s)
	I1206 10:26:24.583476  365843 kic.go:203] duration metric: took 3.936392629s to extract preloaded images to volume ...
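The extraction runs tar inside a throwaway kicbase container, so the host needs no lz4 tooling: the tarball is bind-mounted read-only and unpacked straight into the node's named volume. A hedged Go sketch of the same pattern, including the kind of "duration metric" timing the log records (paths and names are the ones from this run):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        start := time.Now()
        // Unpack the preload tarball into the "addons-545880" volume using
        // /usr/bin/tar from the kicbase image, as in the cli_runner step above.
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", "/home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro",
            "-v", "addons-545880:/extractDir",
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032",
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir",
        )
        if out, err := cmd.CombinedOutput(); err != nil {
            fmt.Printf("extract failed: %v\n%s", err, out)
            return
        }
        fmt.Printf("duration metric: took %s to extract preloaded images\n", time.Since(start))
    }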
	W1206 10:26:24.583615  365843 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1206 10:26:24.583724  365843 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 10:26:24.646036  365843 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-545880 --name addons-545880 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-545880 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-545880 --network addons-545880 --ip 192.168.49.2 --volume addons-545880:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1206 10:26:24.957434  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Running}}
	I1206 10:26:24.980192  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:26:25.001898  365843 cli_runner.go:164] Run: docker exec addons-545880 stat /var/lib/dpkg/alternatives/iptables
	I1206 10:26:25.056757  365843 oci.go:144] the created container "addons-545880" has a running status.
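Note the `--publish=127.0.0.1::8443` form in the docker run above: the empty host-port field asks Docker for an ephemeral loopback port per container port, which is why the log must later inspect the container to learn that SSH landed on 127.0.0.1:33143. A small sketch for recovering such a mapping (`docker port` is equivalent to the NetworkSettings.Ports template used a few lines below):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Ask Docker where container port 22 of the node was published.
        out, err := exec.Command("docker", "port", "addons-545880", "22").Output()
        if err != nil {
            fmt.Println("port lookup failed:", err)
            return
        }
        fmt.Printf("ssh endpoint: %s", out) // e.g. 127.0.0.1:33143
    }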
	I1206 10:26:25.056786  365843 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa...
	I1206 10:26:25.314150  365843 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 10:26:25.334819  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:26:25.356817  365843 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 10:26:25.356837  365843 kic_runner.go:114] Args: [docker exec --privileged addons-545880 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 10:26:25.425964  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:26:25.455070  365843 machine.go:94] provisionDockerMachine start ...
	I1206 10:26:25.455174  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:26:25.482546  365843 main.go:143] libmachine: Using SSH client type: native
	I1206 10:26:25.482886  365843 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1206 10:26:25.482900  365843 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:26:25.483648  365843 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1206 10:26:28.634993  365843 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-545880
	
	I1206 10:26:28.635016  365843 ubuntu.go:182] provisioning hostname "addons-545880"
	I1206 10:26:28.635080  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:26:28.652678  365843 main.go:143] libmachine: Using SSH client type: native
	I1206 10:26:28.652999  365843 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1206 10:26:28.653015  365843 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-545880 && echo "addons-545880" | sudo tee /etc/hostname
	I1206 10:26:28.812977  365843 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-545880
	
	I1206 10:26:28.813063  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:26:28.831988  365843 main.go:143] libmachine: Using SSH client type: native
	I1206 10:26:28.832328  365843 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1206 10:26:28.832350  365843 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-545880' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-545880/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-545880' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:26:28.983825  365843 main.go:143] libmachine: SSH cmd err, output: <nil>: 
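Everything in provisionDockerMachine is plain SSH with the freshly generated key against that ephemeral port; the first dial above even fails once (handshake EOF) while sshd is still starting and is simply retried. A minimal sketch of the same handshake using golang.org/x/crypto/ssh (key path and port taken from this run; host-key checking is skipped only because the endpoint is loopback-bound):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // loopback-only endpoint
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:33143", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, _ := sess.CombinedOutput("hostname")
        fmt.Printf("%s", out)
    }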
	I1206 10:26:28.983857  365843 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-362985/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-362985/.minikube}
	I1206 10:26:28.983877  365843 ubuntu.go:190] setting up certificates
	I1206 10:26:28.983887  365843 provision.go:84] configureAuth start
	I1206 10:26:28.983946  365843 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-545880
	I1206 10:26:29.000386  365843 provision.go:143] copyHostCerts
	I1206 10:26:29.000482  365843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem (1082 bytes)
	I1206 10:26:29.000625  365843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem (1123 bytes)
	I1206 10:26:29.000690  365843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem (1679 bytes)
	I1206 10:26:29.000947  365843 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem org=jenkins.addons-545880 san=[127.0.0.1 192.168.49.2 addons-545880 localhost minikube]
	I1206 10:26:29.513592  365843 provision.go:177] copyRemoteCerts
	I1206 10:26:29.513666  365843 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:26:29.513708  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:26:29.531256  365843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:26:29.635406  365843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 10:26:29.653517  365843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1206 10:26:29.672860  365843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 10:26:29.692361  365843 provision.go:87] duration metric: took 708.449519ms to configureAuth
	I1206 10:26:29.692436  365843 ubuntu.go:206] setting minikube options for container-runtime
	I1206 10:26:29.692646  365843 config.go:182] Loaded profile config "addons-545880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:26:29.692762  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:26:29.710530  365843 main.go:143] libmachine: Using SSH client type: native
	I1206 10:26:29.710851  365843 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1206 10:26:29.710875  365843 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 10:26:30.051304  365843 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 10:26:30.051333  365843 machine.go:97] duration metric: took 4.596242148s to provisionDockerMachine
	I1206 10:26:30.051346  365843 client.go:176] duration metric: took 11.535634515s to LocalClient.Create
	I1206 10:26:30.051360  365843 start.go:167] duration metric: took 11.535703529s to libmachine.API.Create "addons-545880"
	I1206 10:26:30.051368  365843 start.go:293] postStartSetup for "addons-545880" (driver="docker")
	I1206 10:26:30.051412  365843 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:26:30.051618  365843 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:26:30.051671  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:26:30.094193  365843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:26:30.236130  365843 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:26:30.240076  365843 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 10:26:30.240106  365843 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 10:26:30.240119  365843 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-362985/.minikube/addons for local assets ...
	I1206 10:26:30.240212  365843 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-362985/.minikube/files for local assets ...
	I1206 10:26:30.240237  365843 start.go:296] duration metric: took 188.863853ms for postStartSetup
	I1206 10:26:30.240613  365843 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-545880
	I1206 10:26:30.259182  365843 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/config.json ...
	I1206 10:26:30.259533  365843 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:26:30.259588  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:26:30.277753  365843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:26:30.381242  365843 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 10:26:30.386082  365843 start.go:128] duration metric: took 11.874158588s to createHost
	I1206 10:26:30.386110  365843 start.go:83] releasing machines lock for "addons-545880", held for 11.874290083s
	I1206 10:26:30.386180  365843 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-545880
	I1206 10:26:30.402938  365843 ssh_runner.go:195] Run: cat /version.json
	I1206 10:26:30.402964  365843 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:26:30.402993  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:26:30.403034  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:26:30.421523  365843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:26:30.425012  365843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:26:30.611263  365843 ssh_runner.go:195] Run: systemctl --version
	I1206 10:26:30.617625  365843 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 10:26:30.659043  365843 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 10:26:30.663560  365843 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:26:30.663640  365843 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:26:30.694683  365843 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1206 10:26:30.694711  365843 start.go:496] detecting cgroup driver to use...
	I1206 10:26:30.694746  365843 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 10:26:30.694799  365843 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 10:26:30.712307  365843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 10:26:30.725382  365843 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:26:30.725445  365843 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:26:30.743114  365843 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:26:30.762434  365843 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:26:30.878831  365843 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:26:31.009808  365843 docker.go:234] disabling docker service ...
	I1206 10:26:31.009943  365843 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:26:31.032972  365843 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:26:31.046702  365843 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:26:31.166275  365843 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:26:31.283189  365843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 10:26:31.297096  365843 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:26:31.311898  365843 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 10:26:31.311971  365843 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:26:31.321664  365843 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 10:26:31.321738  365843 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:26:31.331290  365843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:26:31.340666  365843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:26:31.350131  365843 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:26:31.358805  365843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:26:31.367848  365843 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:26:31.381670  365843 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:26:31.391503  365843 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:26:31.399507  365843 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 10:26:31.407141  365843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:26:31.513722  365843 ssh_runner.go:195] Run: sudo systemctl restart crio
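Taken together, the sed/grep edits above converge /etc/crio/crio.conf.d/02-crio.conf on roughly the following state (a reconstruction of only the keys touched here; the real drop-in carries more settings):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

The unprivileged-port sysctl lets pod processes bind ports below 1024 without extra capabilities, which ingress-style workloads rely on; the daemon-reload and `systemctl restart crio` that follow make the drop-in take effect.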
	I1206 10:26:31.675113  365843 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 10:26:31.675276  365843 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 10:26:31.679130  365843 start.go:564] Will wait 60s for crictl version
	I1206 10:26:31.679230  365843 ssh_runner.go:195] Run: which crictl
	I1206 10:26:31.682765  365843 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 10:26:31.708304  365843 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 10:26:31.708449  365843 ssh_runner.go:195] Run: crio --version
	I1206 10:26:31.738317  365843 ssh_runner.go:195] Run: crio --version
	I1206 10:26:31.773296  365843 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1206 10:26:31.776156  365843 cli_runner.go:164] Run: docker network inspect addons-545880 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:26:31.792593  365843 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 10:26:31.796338  365843 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 10:26:31.806373  365843 kubeadm.go:884] updating cluster {Name:addons-545880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-545880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:26:31.806495  365843 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 10:26:31.806552  365843 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:26:31.845347  365843 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:26:31.845373  365843 crio.go:433] Images already preloaded, skipping extraction
	I1206 10:26:31.845439  365843 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:26:31.871214  365843 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:26:31.871242  365843 cache_images.go:86] Images are preloaded, skipping loading
	I1206 10:26:31.871250  365843 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1206 10:26:31.871357  365843 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-545880 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-545880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 10:26:31.871507  365843 ssh_runner.go:195] Run: crio config
	I1206 10:26:31.923043  365843 cni.go:84] Creating CNI manager for ""
	I1206 10:26:31.923075  365843 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:26:31.923095  365843 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 10:26:31.923119  365843 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-545880 NodeName:addons-545880 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:26:31.923277  365843 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-545880"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 10:26:31.923381  365843 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 10:26:31.931106  365843 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:26:31.931228  365843 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:26:31.938984  365843 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1206 10:26:31.952068  365843 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 10:26:31.965345  365843 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
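The rendered config above is staged as /var/tmp/minikube/kubeadm.yaml.new (2210 bytes, per the scp line) before being copied into place. Outside of minikube, a file like this can be exercised without touching the node via kubeadm's dry-run mode, e.g.:

    kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run

which prints the manifests and actions kubeadm would have taken; recent kubeadm releases also offer `kubeadm config validate` for a pure schema check.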
	I1206 10:26:31.978869  365843 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 10:26:31.982501  365843 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 10:26:31.993850  365843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:26:32.104139  365843 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:26:32.120124  365843 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880 for IP: 192.168.49.2
	I1206 10:26:32.120199  365843 certs.go:195] generating shared ca certs ...
	I1206 10:26:32.120230  365843 certs.go:227] acquiring lock for ca certs: {Name:mke2ec61a37b6f3abbcbeb9abd23d6a19d011dd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:26:32.120419  365843 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.key
	I1206 10:26:32.254819  365843 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt ...
	I1206 10:26:32.254857  365843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt: {Name:mkf771137823ff623c6ce260a556cc9c6ad707d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:26:32.255088  365843 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-362985/.minikube/ca.key ...
	I1206 10:26:32.255105  365843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/.minikube/ca.key: {Name:mk197c4654e2009e868e924392a5b050891b363d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:26:32.255198  365843 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.key
	I1206 10:26:32.685089  365843 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.crt ...
	I1206 10:26:32.685131  365843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.crt: {Name:mk4b6bd64e89bf8d549094c3ac332c5535e71c3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:26:32.685320  365843 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.key ...
	I1206 10:26:32.685335  365843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.key: {Name:mk2a9ab3182afa855d079fe41df0313a5e14ad3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:26:32.685408  365843 certs.go:257] generating profile certs ...
	I1206 10:26:32.685470  365843 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/client.key
	I1206 10:26:32.685485  365843 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/client.crt with IP's: []
	I1206 10:26:33.138918  365843 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/client.crt ...
	I1206 10:26:33.138954  365843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/client.crt: {Name:mkb1f18f04909fc72b9bf1587c10de224fd47072 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:26:33.139144  365843 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/client.key ...
	I1206 10:26:33.139157  365843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/client.key: {Name:mk166fa649e391da7c0ce7f16eb96f9706703743 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:26:33.139240  365843 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/apiserver.key.1afdede4
	I1206 10:26:33.139262  365843 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/apiserver.crt.1afdede4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1206 10:26:34.173485  365843 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/apiserver.crt.1afdede4 ...
	I1206 10:26:34.173521  365843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/apiserver.crt.1afdede4: {Name:mk0415fbe505048311acc9e1ccabd0da9ef1010b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:26:34.173710  365843 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/apiserver.key.1afdede4 ...
	I1206 10:26:34.173724  365843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/apiserver.key.1afdede4: {Name:mke5528a50033758cc15a754271a4aa845ad8d3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:26:34.173803  365843 certs.go:382] copying /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/apiserver.crt.1afdede4 -> /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/apiserver.crt
	I1206 10:26:34.173885  365843 certs.go:386] copying /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/apiserver.key.1afdede4 -> /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/apiserver.key
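The apiserver profile cert is issued for the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]: the first service-CIDR address (the in-cluster `kubernetes` VIP), loopback, and the node IP. For illustration only, a self-signed Go sketch of that SAN handling (minikube actually signs with its minikubeCA rather than self-signing):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The IP SANs the log shows for apiserver.crt.1afdede4:
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
            },
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }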
	I1206 10:26:34.173942  365843 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/proxy-client.key
	I1206 10:26:34.173963  365843 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/proxy-client.crt with IP's: []
	I1206 10:26:34.326260  365843 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/proxy-client.crt ...
	I1206 10:26:34.326291  365843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/proxy-client.crt: {Name:mk9b6c0f834a5853575a3384d35d223d52096489 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:26:34.327229  365843 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/proxy-client.key ...
	I1206 10:26:34.327248  365843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/proxy-client.key: {Name:mk46d66594947c08bd137d9b69279b27e86a64c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:26:34.327496  365843 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 10:26:34.327554  365843 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem (1082 bytes)
	I1206 10:26:34.327590  365843 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:26:34.327621  365843 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem (1679 bytes)
	I1206 10:26:34.328194  365843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:26:34.347478  365843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 10:26:34.366452  365843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:26:34.386403  365843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 10:26:34.405102  365843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1206 10:26:34.422744  365843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 10:26:34.440462  365843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:26:34.459053  365843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 10:26:34.481101  365843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:26:34.500117  365843 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 10:26:34.516869  365843 ssh_runner.go:195] Run: openssl version
	I1206 10:26:34.527538  365843 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:26:34.536477  365843 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:26:34.544345  365843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:26:34.548207  365843 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:26 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:26:34.548272  365843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:26:34.589739  365843 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 10:26:34.597397  365843 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
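The b5213941.0 symlink name is not arbitrary: OpenSSL looks up CA certificates in /etc/ssl/certs by subject-name hash with a .0 suffix, and the hash comes from the `openssl x509 -hash -noout` run a few lines above. The pairing can be confirmed by hand:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941

so /etc/ssl/certs/b5213941.0 is exactly where TLS clients on the node will find minikubeCA.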
	I1206 10:26:34.604884  365843 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:26:34.608686  365843 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 10:26:34.608739  365843 kubeadm.go:401] StartCluster: {Name:addons-545880 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-545880 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:26:34.608826  365843 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:26:34.608898  365843 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:26:34.638249  365843 cri.go:89] found id: ""
	I1206 10:26:34.638320  365843 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:26:34.646396  365843 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 10:26:34.654345  365843 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 10:26:34.654408  365843 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:26:34.662204  365843 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:26:34.662224  365843 kubeadm.go:158] found existing configuration files:
	
	I1206 10:26:34.662293  365843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 10:26:34.669833  365843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:26:34.669898  365843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:26:34.677263  365843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 10:26:34.685094  365843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:26:34.685159  365843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:26:34.692418  365843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 10:26:34.700411  365843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:26:34.700498  365843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:26:34.707954  365843 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 10:26:34.715702  365843 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:26:34.715823  365843 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 10:26:34.723328  365843 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 10:26:34.764310  365843 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1206 10:26:34.764642  365843 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:26:34.793983  365843 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:26:34.794076  365843 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:26:34.794145  365843 kubeadm.go:319] OS: Linux
	I1206 10:26:34.794210  365843 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:26:34.794299  365843 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:26:34.794371  365843 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:26:34.794492  365843 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:26:34.794578  365843 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:26:34.794649  365843 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:26:34.794720  365843 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:26:34.794789  365843 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:26:34.794865  365843 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:26:34.862625  365843 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:26:34.862815  365843 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:26:34.862963  365843 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:26:34.870565  365843 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:26:34.873867  365843 out.go:252]   - Generating certificates and keys ...
	I1206 10:26:34.874290  365843 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:26:34.874388  365843 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:26:35.882071  365843 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 10:26:36.430227  365843 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 10:26:36.654976  365843 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 10:26:37.161112  365843 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 10:26:37.556781  365843 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 10:26:37.557172  365843 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-545880 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1206 10:26:38.246505  365843 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 10:26:38.247091  365843 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-545880 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1206 10:26:38.635365  365843 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 10:26:38.896700  365843 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 10:26:39.122752  365843 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 10:26:39.122833  365843 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:26:39.481677  365843 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:26:40.114833  365843 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:26:40.618194  365843 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:26:41.653071  365843 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:26:42.157982  365843 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:26:42.158723  365843 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:26:42.162918  365843 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:26:42.172800  365843 out.go:252]   - Booting up control plane ...
	I1206 10:26:42.172923  365843 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:26:42.173242  365843 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:26:42.173320  365843 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:26:42.193704  365843 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:26:42.194114  365843 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:26:42.202175  365843 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:26:42.202666  365843 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:26:42.202717  365843 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:26:42.347864  365843 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:26:42.347985  365843 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 10:26:43.343471  365843 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000757071s
	I1206 10:26:43.347408  365843 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 10:26:43.347525  365843 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1206 10:26:43.347628  365843 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 10:26:43.347719  365843 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1206 10:26:46.305915  365843 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.95796749s
	I1206 10:26:47.502343  365843 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.155039186s
	I1206 10:26:49.349238  365843 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001911377s
	I1206 10:26:49.380240  365843 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1206 10:26:49.399646  365843 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1206 10:26:49.414630  365843 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1206 10:26:49.414859  365843 kubeadm.go:319] [mark-control-plane] Marking the node addons-545880 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1206 10:26:49.431137  365843 kubeadm.go:319] [bootstrap-token] Using token: r0k11y.th9qih0pbh9vle2v
	I1206 10:26:49.434234  365843 out.go:252]   - Configuring RBAC rules ...
	I1206 10:26:49.434385  365843 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1206 10:26:49.439623  365843 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1206 10:26:49.450425  365843 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1206 10:26:49.455092  365843 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1206 10:26:49.459448  365843 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1206 10:26:49.464084  365843 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1206 10:26:49.755716  365843 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1206 10:26:50.196175  365843 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1206 10:26:50.757506  365843 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1206 10:26:50.757530  365843 kubeadm.go:319] 
	I1206 10:26:50.757593  365843 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1206 10:26:50.757600  365843 kubeadm.go:319] 
	I1206 10:26:50.757694  365843 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1206 10:26:50.757714  365843 kubeadm.go:319] 
	I1206 10:26:50.757740  365843 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1206 10:26:50.757798  365843 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1206 10:26:50.757855  365843 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1206 10:26:50.757863  365843 kubeadm.go:319] 
	I1206 10:26:50.757931  365843 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1206 10:26:50.757940  365843 kubeadm.go:319] 
	I1206 10:26:50.757985  365843 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1206 10:26:50.757994  365843 kubeadm.go:319] 
	I1206 10:26:50.758043  365843 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1206 10:26:50.758117  365843 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1206 10:26:50.758185  365843 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1206 10:26:50.758191  365843 kubeadm.go:319] 
	I1206 10:26:50.758270  365843 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1206 10:26:50.758347  365843 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1206 10:26:50.758357  365843 kubeadm.go:319] 
	I1206 10:26:50.758446  365843 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token r0k11y.th9qih0pbh9vle2v \
	I1206 10:26:50.758547  365843 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:89a0fbd3aa916cfb970075c0d704943ded4fe0d81a4b725dbb90c39f29d66bfe \
	I1206 10:26:50.758568  365843 kubeadm.go:319] 	--control-plane 
	I1206 10:26:50.758574  365843 kubeadm.go:319] 
	I1206 10:26:50.758655  365843 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1206 10:26:50.758662  365843 kubeadm.go:319] 
	I1206 10:26:50.758740  365843 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token r0k11y.th9qih0pbh9vle2v \
	I1206 10:26:50.758839  365843 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:89a0fbd3aa916cfb970075c0d704943ded4fe0d81a4b725dbb90c39f29d66bfe 
	I1206 10:26:50.762577  365843 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1206 10:26:50.762798  365843 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 10:26:50.762901  365843 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
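Note: the sha256 value passed via --discovery-token-ca-cert-hash in the join commands above is the hash of the cluster CA public key. As a sketch (assuming the standard kubeadm PKI location on the control-plane node), it can be recomputed and compared against the printed value with:

	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'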
	I1206 10:26:50.762923  365843 cni.go:84] Creating CNI manager for ""
	I1206 10:26:50.762933  365843 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:26:50.766141  365843 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1206 10:26:50.769066  365843 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1206 10:26:50.773130  365843 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1206 10:26:50.773156  365843 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1206 10:26:50.786635  365843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1206 10:26:51.125762  365843 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1206 10:26:51.125858  365843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:26:51.125917  365843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-545880 minikube.k8s.io/updated_at=2025_12_06T10_26_51_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4 minikube.k8s.io/name=addons-545880 minikube.k8s.io/primary=true
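The two kubectl invocations above grant the kube-system default service account cluster-admin (the minikube-rbac binding) and stamp the node with minikube metadata labels. The applied labels can be inspected afterwards with, for example:

	kubectl get node addons-545880 --show-labels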
	I1206 10:26:51.295584  365843 ops.go:34] apiserver oom_adj: -16
	I1206 10:26:51.295711  365843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:26:51.795823  365843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:26:52.295852  365843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:26:52.796448  365843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:26:53.296235  365843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:26:53.796440  365843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:26:54.296609  365843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:26:54.795906  365843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:26:55.295904  365843 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1206 10:26:55.394733  365843 kubeadm.go:1114] duration metric: took 4.268933936s to wait for elevateKubeSystemPrivileges
	I1206 10:26:55.394771  365843 kubeadm.go:403] duration metric: took 20.786038447s to StartCluster
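The repeated `kubectl get sa default` runs above are the elevateKubeSystemPrivileges wait loop: minikube polls roughly twice a second until the default service account exists, which signals that the cluster is ready to use the RBAC objects just created. The equivalent manual check is simply:

	kubectl -n default get serviceaccount default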
	I1206 10:26:55.394789  365843 settings.go:142] acquiring lock: {Name:mk789e01bfd4ab9fa1e2a8415fa99b570b26926a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:26:55.394908  365843 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 10:26:55.395316  365843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/kubeconfig: {Name:mk779651834cfbdc6f0b5e8f5a9abc0f05106181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:26:55.395567  365843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1206 10:26:55.395596  365843 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 10:26:55.395834  365843 config.go:182] Loaded profile config "addons-545880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:26:55.395866  365843 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
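Every key set to true in the toEnable map above is turned on for this profile in the "Setting addon" lines that follow; each entry corresponds to what a user would run by hand, for example:

	out/minikube-linux-arm64 -p addons-545880 addons enable metrics-server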
	I1206 10:26:55.395942  365843 addons.go:70] Setting yakd=true in profile "addons-545880"
	I1206 10:26:55.395958  365843 addons.go:239] Setting addon yakd=true in "addons-545880"
	I1206 10:26:55.395984  365843 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:26:55.396007  365843 addons.go:70] Setting inspektor-gadget=true in profile "addons-545880"
	I1206 10:26:55.396024  365843 addons.go:239] Setting addon inspektor-gadget=true in "addons-545880"
	I1206 10:26:55.396044  365843 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:26:55.396446  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:26:55.396539  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:26:55.396868  365843 addons.go:70] Setting metrics-server=true in profile "addons-545880"
	I1206 10:26:55.396889  365843 addons.go:239] Setting addon metrics-server=true in "addons-545880"
	I1206 10:26:55.396911  365843 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:26:55.397330  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:26:55.399757  365843 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-545880"
	I1206 10:26:55.399792  365843 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-545880"
	I1206 10:26:55.399929  365843 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:26:55.400413  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:26:55.400682  365843 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-545880"
	I1206 10:26:55.403919  365843 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-545880"
	I1206 10:26:55.403958  365843 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:26:55.404427  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:26:55.403425  365843 addons.go:70] Setting registry=true in profile "addons-545880"
	I1206 10:26:55.413996  365843 addons.go:239] Setting addon registry=true in "addons-545880"
	I1206 10:26:55.414059  365843 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:26:55.414690  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:26:55.403441  365843 addons.go:70] Setting registry-creds=true in profile "addons-545880"
	I1206 10:26:55.423408  365843 addons.go:239] Setting addon registry-creds=true in "addons-545880"
	I1206 10:26:55.423519  365843 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:26:55.403595  365843 addons.go:70] Setting storage-provisioner=true in profile "addons-545880"
	I1206 10:26:55.403601  365843 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-545880"
	I1206 10:26:55.403605  365843 addons.go:70] Setting volcano=true in profile "addons-545880"
	I1206 10:26:55.403608  365843 addons.go:70] Setting volumesnapshots=true in profile "addons-545880"
	I1206 10:26:55.403700  365843 out.go:179] * Verifying Kubernetes components...
	I1206 10:26:55.403712  365843 addons.go:70] Setting gcp-auth=true in profile "addons-545880"
	I1206 10:26:55.403717  365843 addons.go:70] Setting cloud-spanner=true in profile "addons-545880"
	I1206 10:26:55.403727  365843 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-545880"
	I1206 10:26:55.403731  365843 addons.go:70] Setting default-storageclass=true in profile "addons-545880"
	I1206 10:26:55.403735  365843 addons.go:70] Setting ingress-dns=true in profile "addons-545880"
	I1206 10:26:55.403738  365843 addons.go:70] Setting ingress=true in profile "addons-545880"
	I1206 10:26:55.430347  365843 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-545880"
	I1206 10:26:55.430372  365843 addons.go:239] Setting addon volumesnapshots=true in "addons-545880"
	I1206 10:26:55.431554  365843 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:26:55.433741  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:26:55.430984  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:26:55.430995  365843 addons.go:239] Setting addon storage-provisioner=true in "addons-545880"
	I1206 10:26:55.451763  365843 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:26:55.431005  365843 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-545880"
	I1206 10:26:55.475834  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:26:55.431030  365843 addons.go:239] Setting addon volcano=true in "addons-545880"
	I1206 10:26:55.475977  365843 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:26:55.431043  365843 mustload.go:66] Loading cluster: addons-545880
	I1206 10:26:55.476362  365843 config.go:182] Loaded profile config "addons-545880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:26:55.431106  365843 addons.go:239] Setting addon cloud-spanner=true in "addons-545880"
	I1206 10:26:55.477618  365843 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:26:55.478143  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:26:55.495791  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:26:55.431140  365843 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-545880"
	I1206 10:26:55.443323  365843 addons.go:239] Setting addon ingress-dns=true in "addons-545880"
	I1206 10:26:55.443366  365843 addons.go:239] Setting addon ingress=true in "addons-545880"
	I1206 10:26:55.443467  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:26:55.498157  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:26:55.535563  365843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:26:55.568757  365843 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1206 10:26:55.571838  365843 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1206 10:26:55.571864  365843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1206 10:26:55.571936  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:26:55.584748  365843 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:26:55.588684  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:26:55.595842  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:26:55.597368  365843 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:26:55.597872  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:26:55.611488  365843 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1206 10:26:55.611673  365843 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:26:55.612197  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:26:55.619176  365843 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1206 10:26:55.619340  365843 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1206 10:26:55.619354  365843 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1206 10:26:55.619493  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:26:55.643508  365843 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1206 10:26:55.646784  365843 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1206 10:26:55.646807  365843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1206 10:26:55.646874  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:26:55.678766  365843 out.go:179]   - Using image docker.io/registry:3.0.0
	I1206 10:26:55.681775  365843 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1206 10:26:55.684655  365843 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1206 10:26:55.684682  365843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1206 10:26:55.684765  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:26:55.710808  365843 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1206 10:26:55.716836  365843 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1206 10:26:55.716859  365843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1206 10:26:55.716933  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:26:55.728970  365843 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1206 10:26:55.728998  365843 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1206 10:26:55.729084  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:26:55.746545  365843 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:26:55.750924  365843 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-545880"
	I1206 10:26:55.751045  365843 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:26:55.751808  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:26:55.763890  365843 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1206 10:26:55.787751  365843 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 10:26:55.790610  365843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:26:55.798765  365843 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1206 10:26:55.798785  365843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1206 10:26:55.798847  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:26:55.799111  365843 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:26:55.799137  365843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 10:26:55.799217  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:26:55.788023  365843 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1206 10:26:55.819336  365843 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1206 10:26:55.819697  365843 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1206 10:26:55.823417  365843 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1206 10:26:55.823442  365843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1206 10:26:55.823512  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:26:55.823695  365843 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1206 10:26:55.823705  365843 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1206 10:26:55.823744  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:26:55.854980  365843 addons.go:239] Setting addon default-storageclass=true in "addons-545880"
	I1206 10:26:55.855021  365843 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:26:55.855504  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:26:55.883443  365843 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1206 10:26:55.889126  365843 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1206 10:26:55.892240  365843 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1206 10:26:55.897148  365843 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1206 10:26:55.901241  365843 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1206 10:26:55.902264  365843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:26:55.905454  365843 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1206 10:26:55.926916  365843 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1206 10:26:55.927140  365843 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1206 10:26:55.932348  365843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:26:55.958632  365843 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1206 10:26:55.962195  365843 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1206 10:26:55.964049  365843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:26:55.968399  365843 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1206 10:26:55.968475  365843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1206 10:26:55.968571  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	W1206 10:26:55.969102  365843 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1206 10:26:55.982324  365843 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1206 10:26:55.985132  365843 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1206 10:26:55.985205  365843 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1206 10:26:55.985309  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:26:55.992142  365843 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:26:56.017521  365843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:26:56.027674  365843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:26:56.048289  365843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:26:56.050817  365843 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1206 10:26:56.056548  365843 out.go:179]   - Using image docker.io/busybox:stable
	I1206 10:26:56.059240  365843 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1206 10:26:56.060352  365843 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1206 10:26:56.060380  365843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1206 10:26:56.060452  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:26:56.061394  365843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:26:56.068100  365843 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1206 10:26:56.068126  365843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1206 10:26:56.068197  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:26:56.068854  365843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:26:56.082814  365843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:26:56.136276  365843 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 10:26:56.136304  365843 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 10:26:56.136361  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:26:56.154518  365843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:26:56.167592  365843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:26:56.180964  365843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	W1206 10:26:56.182126  365843 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1206 10:26:56.182156  365843 retry.go:31] will retry after 311.449807ms: ssh: handshake failed: EOF
	I1206 10:26:56.190252  365843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:26:56.206629  365843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	W1206 10:26:56.207732  365843 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1206 10:26:56.207754  365843 retry.go:31] will retry after 249.762898ms: ssh: handshake failed: EOF
	W1206 10:26:56.494665  365843 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1206 10:26:56.494693  365843 retry.go:31] will retry after 285.993596ms: ssh: handshake failed: EOF
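The `ssh: handshake failed: EOF` warnings above come from opening many SSH sessions to the node in parallel before sshd will accept them all; each failed dial is retried after a short, apparently jittered delay. All sessions target the same forwarded endpoint, which (using the port, key, and user shown in the sshutil lines of this run) could be reached manually as a sanity check:

	ssh -i /home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa \
	  -p 33143 docker@127.0.0.1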
	I1206 10:26:56.586068  365843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1206 10:26:56.760971  365843 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1206 10:26:56.761035  365843 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1206 10:26:56.768421  365843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:26:56.788122  365843 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1206 10:26:56.788147  365843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1206 10:26:56.824696  365843 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1206 10:26:56.824723  365843 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1206 10:26:56.840892  365843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1206 10:26:56.876257  365843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1206 10:26:56.882175  365843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1206 10:26:56.891786  365843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1206 10:26:56.917826  365843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1206 10:26:56.979846  365843 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1206 10:26:56.979918  365843 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1206 10:26:57.012253  365843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1206 10:26:57.019122  365843 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1206 10:26:57.019196  365843 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1206 10:26:57.028169  365843 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1206 10:26:57.028241  365843 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1206 10:26:57.048027  365843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:26:57.112654  365843 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1206 10:26:57.112725  365843 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1206 10:26:57.115104  365843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1206 10:26:57.160437  365843 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 10:26:57.160535  365843 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1206 10:26:57.207345  365843 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1206 10:26:57.207367  365843 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1206 10:26:57.208652  365843 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1206 10:26:57.208668  365843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1206 10:26:57.343696  365843 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1206 10:26:57.343771  365843 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1206 10:26:57.347691  365843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1206 10:26:57.355589  365843 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1206 10:26:57.355614  365843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1206 10:26:57.362689  365843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1206 10:26:57.416618  365843 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.424433441s)
	I1206 10:26:57.417381  365843 node_ready.go:35] waiting up to 6m0s for node "addons-545880" to be "Ready" ...
	I1206 10:26:57.418126  365843 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.598991437s)
	I1206 10:26:57.418152  365843 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
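The sed pipeline that just completed rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway address; the injected Corefile fragment is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}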
	I1206 10:26:57.610704  365843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1206 10:26:57.620885  365843 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1206 10:26:57.620918  365843 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1206 10:26:57.702495  365843 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1206 10:26:57.702530  365843 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1206 10:26:57.854193  365843 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 10:26:57.854226  365843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1206 10:26:57.922717  365843 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-545880" context rescaled to 1 replicas
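The rescale above trims CoreDNS to a single replica for this one-node cluster; it is equivalent to:

	kubectl -n kube-system scale deployment coredns --replicas=1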
	I1206 10:26:57.957226  365843 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.371118161s)
	I1206 10:26:58.069786  365843 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1206 10:26:58.069829  365843 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1206 10:26:58.117240  365843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 10:26:58.283663  365843 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1206 10:26:58.283745  365843 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1206 10:26:58.478764  365843 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1206 10:26:58.478840  365843 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1206 10:26:58.653255  365843 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1206 10:26:58.653324  365843 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1206 10:26:58.867398  365843 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1206 10:26:58.867467  365843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1206 10:26:58.992932  365843 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1206 10:26:58.992960  365843 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1206 10:26:59.212393  365843 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1206 10:26:59.212419  365843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1206 10:26:59.388709  365843 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1206 10:26:59.388778  365843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	W1206 10:26:59.424927  365843 node_ready.go:57] node "addons-545880" has "Ready":"False" status (will retry)
	I1206 10:26:59.586206  365843 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1206 10:26:59.586277  365843 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1206 10:26:59.859237  365843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1206 10:27:00.010501  365843 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.242038625s)
	I1206 10:27:00.010640  365843 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.169722597s)
	I1206 10:27:00.651414  365843 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.775032579s)
	I1206 10:27:00.932952  365843 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.050700954s)
	I1206 10:27:00.933043  365843 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.041185196s)
	I1206 10:27:00.933101  365843 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.015203792s)
	W1206 10:27:01.434232  365843 node_ready.go:57] node "addons-545880" has "Ready":"False" status (will retry)
	I1206 10:27:01.922166  365843 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.909822613s)
	I1206 10:27:01.922204  365843 addons.go:495] Verifying addon ingress=true in "addons-545880"
	I1206 10:27:01.922354  365843 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.87425312s)
	I1206 10:27:01.922611  365843 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.807442642s)
	I1206 10:27:01.922705  365843 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.574935078s)
	I1206 10:27:01.922964  365843 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.560201575s)
	I1206 10:27:01.922985  365843 addons.go:495] Verifying addon metrics-server=true in "addons-545880"
	I1206 10:27:01.923039  365843 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.312288854s)
	I1206 10:27:01.923090  365843 addons.go:495] Verifying addon registry=true in "addons-545880"
	I1206 10:27:01.923342  365843 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.806045083s)
	W1206 10:27:01.923537  365843 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1206 10:27:01.923565  365843 retry.go:31] will retry after 251.579721ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
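The failure above is an ordering race, not a bad manifest: the VolumeSnapshot CRDs and a VolumeSnapshotClass object are submitted in the same apply batch, and the API server rejects the class because the freshly created CRDs are not yet established (hence "ensure CRDs are installed first"). The retry below re-applies with --force after a short delay and succeeds once the CRDs are ready; an alternative sketch is to wait for establishment explicitly before applying the class:

	kubectl wait --for condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io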
	I1206 10:27:01.925812  365843 out.go:179] * Verifying ingress addon...
	I1206 10:27:01.927764  365843 out.go:179] * Verifying registry addon...
	I1206 10:27:01.927788  365843 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-545880 service yakd-dashboard -n yakd-dashboard
	
	I1206 10:27:01.930607  365843 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1206 10:27:01.930794  365843 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1206 10:27:01.941322  365843 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1206 10:27:01.941348  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:01.941497  365843 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1206 10:27:01.941512  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:02.176003  365843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1206 10:27:02.238251  365843 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.378903705s)
	I1206 10:27:02.238289  365843 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-545880"
	I1206 10:27:02.241401  365843 out.go:179] * Verifying csi-hostpath-driver addon...
	I1206 10:27:02.245069  365843 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1206 10:27:02.279736  365843 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1206 10:27:02.279755  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
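The kapi "waiting for pod" lines that follow poll pods by label selector until they leave Pending and report Ready; the same view by hand would be, for example:

	kubectl -n kube-system get pods \
	  -l kubernetes.io/minikube-addons=csi-hostpath-driver --watch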
	I1206 10:27:02.435652  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:02.436011  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:02.748984  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:02.934270  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:02.934467  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:03.249032  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:03.359230  365843 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1206 10:27:03.359330  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:27:03.377198  365843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:27:03.436239  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:03.436359  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:03.502044  365843 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1206 10:27:03.516804  365843 addons.go:239] Setting addon gcp-auth=true in "addons-545880"
	I1206 10:27:03.516854  365843 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:27:03.517333  365843 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:27:03.534900  365843 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1206 10:27:03.534980  365843 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:27:03.553572  365843 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:27:03.748216  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1206 10:27:03.921118  365843 node_ready.go:57] node "addons-545880" has "Ready":"False" status (will retry)
	I1206 10:27:03.934485  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:03.934630  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:04.248943  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:04.434274  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:04.434371  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:04.749397  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:04.897146  365843 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.721089362s)
	I1206 10:27:04.897209  365843 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.362289931s)
	I1206 10:27:04.900002  365843 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1206 10:27:04.903019  365843 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1206 10:27:04.905888  365843 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1206 10:27:04.905908  365843 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1206 10:27:04.920462  365843 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1206 10:27:04.920484  365843 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1206 10:27:04.936908  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:04.937112  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:04.939689  365843 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1206 10:27:04.939710  365843 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1206 10:27:04.955835  365843 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
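The gcp-auth manifests applied above create the namespace, service, and admission webhook that inject the credentials staged earlier at /var/lib/minikube/google_application_credentials.json into workloads; the addon's pod can be watched coming up with:

	kubectl -n gcp-auth get pods -l kubernetes.io/minikube-addons=gcp-auth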
	I1206 10:27:05.249072  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:05.452558  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:05.456610  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:05.473920  365843 addons.go:495] Verifying addon gcp-auth=true in "addons-545880"
	I1206 10:27:05.476924  365843 out.go:179] * Verifying gcp-auth addon...
	I1206 10:27:05.480596  365843 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1206 10:27:05.489823  365843 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1206 10:27:05.489849  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
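From here the kapi.go wait loop takes over: it lists pods matching the label selector and re-checks on a short interval until every pod reports Running, which is why the "current state: Pending" lines below repeat roughly twice a second. A hedged client-go sketch of one such check, assuming a recent client-go/apimachinery; the namespace and selector are copied from the log, but this is not the kapi.go source:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Poll every 500ms (about the cadence visible in the log) until every
        // pod behind the selector is Running, or give up after 6 minutes.
        selector := "kubernetes.io/minikube-addons=gcp-auth"
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pods, err := cs.CoreV1().Pods("gcp-auth").List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil || len(pods.Items) == 0 {
                    return false, nil // transient: list again on the next tick
                }
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
                        return false, nil
                    }
                }
                return true, nil
            })
        if err != nil {
            panic(err)
        }
        fmt.Println("all pods Running for", selector)
    }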
	I1206 10:27:05.748323  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:05.934950  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:05.935273  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:05.984076  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:06.247979  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1206 10:27:06.421093  365843 node_ready.go:57] node "addons-545880" has "Ready":"False" status (will retry)
	I1206 10:27:06.434943  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:06.435353  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:06.483932  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:06.747726  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:06.933708  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:06.933901  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:06.983851  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:07.249228  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:07.434597  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:07.434841  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:07.483900  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:07.748033  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:07.933570  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:07.933764  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:07.983926  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:08.248934  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:08.434143  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:08.434664  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:08.484758  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:08.748943  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1206 10:27:08.921015  365843 node_ready.go:57] node "addons-545880" has "Ready":"False" status (will retry)
	I1206 10:27:08.934375  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:08.934792  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:08.983903  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:09.248357  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:09.434456  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:09.434763  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:09.483490  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:09.748568  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:09.933444  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:09.934544  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:09.984552  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:10.248353  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:10.434619  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:10.434841  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:10.483784  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:10.748601  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:10.934499  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:10.934625  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:10.984857  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:11.248408  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1206 10:27:11.420004  365843 node_ready.go:57] node "addons-545880" has "Ready":"False" status (will retry)
	I1206 10:27:11.434083  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:11.434303  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:11.484141  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:11.748333  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:11.934686  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:11.934807  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:11.983730  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:12.248655  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:12.434845  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:12.435069  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:12.483877  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:12.749100  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:12.934061  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:12.934522  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:12.984323  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:13.248812  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1206 10:27:13.421621  365843 node_ready.go:57] node "addons-545880" has "Ready":"False" status (will retry)
	I1206 10:27:13.434686  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:13.434822  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:13.483746  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:13.748720  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:13.934837  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:13.934946  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:13.983730  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:14.249064  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:14.434883  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:14.435042  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:14.483771  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:14.749339  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:14.934552  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:14.935126  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:14.983990  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:15.249381  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:15.434396  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:15.434735  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:15.484428  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:15.748304  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1206 10:27:15.920164  365843 node_ready.go:57] node "addons-545880" has "Ready":"False" status (will retry)
	I1206 10:27:15.934182  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:15.934541  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:15.984351  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:16.248464  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:16.435014  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:16.435198  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:16.484236  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:16.748268  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:16.934554  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:16.934686  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:16.984560  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:17.249182  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:17.434572  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:17.434763  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:17.483589  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:17.748529  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1206 10:27:17.922103  365843 node_ready.go:57] node "addons-545880" has "Ready":"False" status (will retry)
	I1206 10:27:17.934434  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:17.934692  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:17.983497  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:18.248255  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:18.434739  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:18.434934  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:18.483628  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:18.748605  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:18.935027  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:18.935225  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:18.984249  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:19.248576  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:19.434616  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:19.434773  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:19.483837  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:19.749063  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:19.934496  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:19.934837  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:19.983729  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:20.248985  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1206 10:27:20.420941  365843 node_ready.go:57] node "addons-545880" has "Ready":"False" status (will retry)
	I1206 10:27:20.433748  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:20.433957  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:20.483844  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:20.748769  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:20.934139  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:20.934470  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:20.984674  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:21.249837  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:21.434958  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:21.435173  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:21.484191  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:21.748093  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:21.934652  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:21.934838  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:21.993184  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:22.248416  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:22.433933  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:22.434338  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:22.484284  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:22.748485  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1206 10:27:22.920588  365843 node_ready.go:57] node "addons-545880" has "Ready":"False" status (will retry)
	I1206 10:27:22.934808  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:22.934907  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:22.983779  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:23.249578  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:23.434486  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:23.434787  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:23.484427  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:23.748705  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:23.934509  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:23.934710  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:23.983840  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:24.249195  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:24.434014  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:24.434076  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:24.483924  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:24.748074  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1206 10:27:24.921208  365843 node_ready.go:57] node "addons-545880" has "Ready":"False" status (will retry)
	I1206 10:27:24.934530  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:24.934547  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:24.984107  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:25.249046  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:25.434828  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:25.435075  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:25.483756  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:25.748745  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:25.935136  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:25.935196  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:25.984181  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:26.248296  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:26.433786  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:26.434138  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:26.484234  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:26.748152  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:26.934697  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:26.935011  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:26.983483  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:27.248385  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1206 10:27:27.420523  365843 node_ready.go:57] node "addons-545880" has "Ready":"False" status (will retry)
	I1206 10:27:27.434651  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:27.435148  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:27.483995  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:27.748307  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:27.934598  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:27.934832  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:27.984124  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:28.248313  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:28.435114  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:28.435269  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:28.484342  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:28.748896  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:28.935141  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:28.935290  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:28.984512  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:29.248878  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1206 10:27:29.420957  365843 node_ready.go:57] node "addons-545880" has "Ready":"False" status (will retry)
	I1206 10:27:29.433991  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:29.434390  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:29.484378  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:29.748373  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:29.934838  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:29.935206  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:29.983939  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:30.249005  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:30.434004  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:30.434385  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:30.484311  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:30.748221  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:30.934227  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:30.934812  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:30.983889  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:31.249021  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:31.434660  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:31.434833  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:31.483788  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:31.748843  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1206 10:27:31.921279  365843 node_ready.go:57] node "addons-545880" has "Ready":"False" status (will retry)
	I1206 10:27:31.934423  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:31.934865  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:31.983795  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:32.248995  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:32.434137  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:32.434252  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:32.483937  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:32.749029  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:32.934174  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:32.934844  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:32.984056  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:33.248603  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:33.435077  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:33.435454  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:33.484283  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:33.748250  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:33.934235  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:33.934614  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:33.983519  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:34.248760  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1206 10:27:34.420601  365843 node_ready.go:57] node "addons-545880" has "Ready":"False" status (will retry)
	I1206 10:27:34.434781  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:34.435321  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:34.484053  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:34.748195  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:34.934004  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:34.934146  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:34.984176  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:35.248291  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:35.434258  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:35.434406  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:35.484087  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:35.748216  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:35.934664  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:35.935078  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:35.984144  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:36.248172  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1206 10:27:36.421451  365843 node_ready.go:57] node "addons-545880" has "Ready":"False" status (will retry)
	I1206 10:27:36.434555  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:36.434798  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:36.483758  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:36.756741  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:36.932669  365843 node_ready.go:49] node "addons-545880" is "Ready"
	I1206 10:27:36.932708  365843 node_ready.go:38] duration metric: took 39.515291987s for node "addons-545880" to be "Ready" ...
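The two lines above close out the node-readiness wait: the W-prefixed "will retry" entries stop once the node's Ready condition flips to True. One hedged way to read that condition, reusing the client-go imports from the pod-wait sketch earlier (nodeIsReady is a hypothetical helper name, not minikube's node_ready.go):

    // Reuses the imports from the pod-wait sketch above.
    // nodeIsReady is a hypothetical helper: true once the named node
    // reports the NodeReady condition with status True.
    func nodeIsReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }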
	I1206 10:27:36.932739  365843 api_server.go:52] waiting for apiserver process to appear ...
	I1206 10:27:36.932824  365843 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:27:36.948827  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:36.949421  365843 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1206 10:27:36.949443  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:36.950323  365843 api_server.go:72] duration metric: took 41.55469851s to wait for apiserver process to appear ...
	I1206 10:27:36.950350  365843 api_server.go:88] waiting for apiserver healthz status ...
	I1206 10:27:36.950376  365843 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1206 10:27:36.959790  365843 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1206 10:27:36.966282  365843 api_server.go:141] control plane version: v1.34.2
	I1206 10:27:36.966316  365843 api_server.go:131] duration metric: took 15.950523ms to wait for apiserver health ...
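The healthz wait logged just above is a plain HTTPS GET against the apiserver that expects status 200 and the literal body "ok". A self-contained sketch of that probe; TLS verification is skipped here purely for illustration, whereas the real client trusts the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Address copied from the log; InsecureSkipVerify is an
        // illustration-only shortcut.
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.49.2:8443/healthz")
        if err != nil {
            fmt.Println("healthz unreachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect 200: ok
    }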
	I1206 10:27:36.966326  365843 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 10:27:36.979075  365843 system_pods.go:59] 19 kube-system pods found
	I1206 10:27:36.979119  365843 system_pods.go:61] "coredns-66bc5c9577-mf79v" [9fb9121c-9008-465a-bb08-cf1bfb566a86] Pending
	I1206 10:27:36.979126  365843 system_pods.go:61] "csi-hostpath-attacher-0" [c0a59cbc-3f64-4410-8abf-b27d4ffe5eca] Pending
	I1206 10:27:36.979130  365843 system_pods.go:61] "csi-hostpath-resizer-0" [0c654e88-775b-4aa4-a773-413240f38d73] Pending
	I1206 10:27:36.979169  365843 system_pods.go:61] "csi-hostpathplugin-t892t" [e990c5a0-fb3d-4475-b96a-847521762ef2] Pending
	I1206 10:27:36.979195  365843 system_pods.go:61] "etcd-addons-545880" [54accd8f-75ed-493f-adf4-e8b3bdfc9ee7] Running
	I1206 10:27:36.979206  365843 system_pods.go:61] "kindnet-fmxlt" [1cdbf4b9-d4ed-42da-be4b-c7eb54a81d3a] Running
	I1206 10:27:36.979212  365843 system_pods.go:61] "kube-apiserver-addons-545880" [a9526e71-012f-47ce-a364-b6415e72b9d4] Running
	I1206 10:27:36.979216  365843 system_pods.go:61] "kube-controller-manager-addons-545880" [2b34ecb8-3436-46c0-b985-8fb08a699157] Running
	I1206 10:27:36.979227  365843 system_pods.go:61] "kube-ingress-dns-minikube" [5b393164-cddd-4102-8b6e-d024ce6bcb4c] Pending
	I1206 10:27:36.979231  365843 system_pods.go:61] "kube-proxy-9k5w7" [7b4d65bc-d662-4175-864c-c3d1e6e69e31] Running
	I1206 10:27:36.979235  365843 system_pods.go:61] "kube-scheduler-addons-545880" [9a2c1865-c2cd-4843-a2fb-029ad83e8827] Running
	I1206 10:27:36.979238  365843 system_pods.go:61] "metrics-server-85b7d694d7-6j9l7" [c7953423-aab8-4805-bc6f-57aac150e43a] Pending
	I1206 10:27:36.979242  365843 system_pods.go:61] "nvidia-device-plugin-daemonset-6sbmv" [255b0abe-864c-4ff8-9125-e4ffb052005c] Pending
	I1206 10:27:36.979252  365843 system_pods.go:61] "registry-6b586f9694-lrjzv" [ccdcbcbc-0689-4862-8dd1-415689504519] Pending
	I1206 10:27:36.979274  365843 system_pods.go:61] "registry-creds-764b6fb674-nrw5g" [db673d4c-19b0-4f94-bf04-a6cbd90211bf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 10:27:36.979285  365843 system_pods.go:61] "registry-proxy-j4zp9" [7cd8d746-23f2-448e-a9a6-8281f58e0d70] Pending
	I1206 10:27:36.979290  365843 system_pods.go:61] "snapshot-controller-7d9fbc56b8-4trlb" [316c179e-ac90-449c-9f3e-ffcb1016f2ff] Pending
	I1206 10:27:36.979295  365843 system_pods.go:61] "snapshot-controller-7d9fbc56b8-ncslf" [bb9699cb-e616-4ee5-a31e-886ed8794ce4] Pending
	I1206 10:27:36.979312  365843 system_pods.go:61] "storage-provisioner" [5f75f24f-7f45-4a01-a603-961b4f05ca09] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 10:27:36.979326  365843 system_pods.go:74] duration metric: took 12.9924ms to wait for pod list to return data ...
	I1206 10:27:36.979350  365843 default_sa.go:34] waiting for default service account to be created ...
	I1206 10:27:36.984332  365843 default_sa.go:45] found service account: "default"
	I1206 10:27:36.984370  365843 default_sa.go:55] duration metric: took 5.014042ms for default service account to be created ...
	I1206 10:27:36.984381  365843 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 10:27:37.004677  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:37.005945  365843 system_pods.go:86] 19 kube-system pods found
	I1206 10:27:37.005992  365843 system_pods.go:89] "coredns-66bc5c9577-mf79v" [9fb9121c-9008-465a-bb08-cf1bfb566a86] Pending
	I1206 10:27:37.006003  365843 system_pods.go:89] "csi-hostpath-attacher-0" [c0a59cbc-3f64-4410-8abf-b27d4ffe5eca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 10:27:37.006009  365843 system_pods.go:89] "csi-hostpath-resizer-0" [0c654e88-775b-4aa4-a773-413240f38d73] Pending
	I1206 10:27:37.006015  365843 system_pods.go:89] "csi-hostpathplugin-t892t" [e990c5a0-fb3d-4475-b96a-847521762ef2] Pending
	I1206 10:27:37.006018  365843 system_pods.go:89] "etcd-addons-545880" [54accd8f-75ed-493f-adf4-e8b3bdfc9ee7] Running
	I1206 10:27:37.008531  365843 system_pods.go:89] "kindnet-fmxlt" [1cdbf4b9-d4ed-42da-be4b-c7eb54a81d3a] Running
	I1206 10:27:37.008593  365843 system_pods.go:89] "kube-apiserver-addons-545880" [a9526e71-012f-47ce-a364-b6415e72b9d4] Running
	I1206 10:27:37.008600  365843 system_pods.go:89] "kube-controller-manager-addons-545880" [2b34ecb8-3436-46c0-b985-8fb08a699157] Running
	I1206 10:27:37.008608  365843 system_pods.go:89] "kube-ingress-dns-minikube" [5b393164-cddd-4102-8b6e-d024ce6bcb4c] Pending
	I1206 10:27:37.008612  365843 system_pods.go:89] "kube-proxy-9k5w7" [7b4d65bc-d662-4175-864c-c3d1e6e69e31] Running
	I1206 10:27:37.008617  365843 system_pods.go:89] "kube-scheduler-addons-545880" [9a2c1865-c2cd-4843-a2fb-029ad83e8827] Running
	I1206 10:27:37.008622  365843 system_pods.go:89] "metrics-server-85b7d694d7-6j9l7" [c7953423-aab8-4805-bc6f-57aac150e43a] Pending
	I1206 10:27:37.008655  365843 system_pods.go:89] "nvidia-device-plugin-daemonset-6sbmv" [255b0abe-864c-4ff8-9125-e4ffb052005c] Pending
	I1206 10:27:37.008669  365843 system_pods.go:89] "registry-6b586f9694-lrjzv" [ccdcbcbc-0689-4862-8dd1-415689504519] Pending
	I1206 10:27:37.008679  365843 system_pods.go:89] "registry-creds-764b6fb674-nrw5g" [db673d4c-19b0-4f94-bf04-a6cbd90211bf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 10:27:37.008689  365843 system_pods.go:89] "registry-proxy-j4zp9" [7cd8d746-23f2-448e-a9a6-8281f58e0d70] Pending
	I1206 10:27:37.008698  365843 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4trlb" [316c179e-ac90-449c-9f3e-ffcb1016f2ff] Pending
	I1206 10:27:37.008722  365843 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ncslf" [bb9699cb-e616-4ee5-a31e-886ed8794ce4] Pending
	I1206 10:27:37.008738  365843 system_pods.go:89] "storage-provisioner" [5f75f24f-7f45-4a01-a603-961b4f05ca09] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 10:27:37.008775  365843 retry.go:31] will retry after 277.323708ms: missing components: kube-dns
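The retry.go line above (and the ones that follow) show the polling contract for "waiting for k8s-apps to be running": re-list the kube-system pods, report which components are still missing, and try again after a short randomized delay. A hedged stand-in for that loop; the log only shows the chosen delays, not how retry.go computes them, so the growth factor and jitter below are assumptions:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryUntil re-runs check after a jittered, growing delay until it
    // succeeds or the deadline passes.
    func retryUntil(deadline time.Time, check func() error) error {
        delay := 250 * time.Millisecond
        for {
            err := check()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("gave up: %w", err)
            }
            // Jitter so concurrent waiters don't poll in lockstep.
            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: %v\n", sleep, err)
            time.Sleep(sleep)
            delay = delay * 3 / 2
        }
    }

    func main() {
        tries := 0
        err := retryUntil(time.Now().Add(5*time.Second), func() error {
            tries++
            if tries < 3 {
                return errors.New("missing components: kube-dns")
            }
            return nil
        })
        if err == nil {
            fmt.Println("k8s-apps running")
        }
    }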
	I1206 10:27:37.253451  365843 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1206 10:27:37.253503  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:37.294581  365843 system_pods.go:86] 19 kube-system pods found
	I1206 10:27:37.294637  365843 system_pods.go:89] "coredns-66bc5c9577-mf79v" [9fb9121c-9008-465a-bb08-cf1bfb566a86] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:27:37.294665  365843 system_pods.go:89] "csi-hostpath-attacher-0" [c0a59cbc-3f64-4410-8abf-b27d4ffe5eca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 10:27:37.294688  365843 system_pods.go:89] "csi-hostpath-resizer-0" [0c654e88-775b-4aa4-a773-413240f38d73] Pending
	I1206 10:27:37.294694  365843 system_pods.go:89] "csi-hostpathplugin-t892t" [e990c5a0-fb3d-4475-b96a-847521762ef2] Pending
	I1206 10:27:37.294699  365843 system_pods.go:89] "etcd-addons-545880" [54accd8f-75ed-493f-adf4-e8b3bdfc9ee7] Running
	I1206 10:27:37.294704  365843 system_pods.go:89] "kindnet-fmxlt" [1cdbf4b9-d4ed-42da-be4b-c7eb54a81d3a] Running
	I1206 10:27:37.294715  365843 system_pods.go:89] "kube-apiserver-addons-545880" [a9526e71-012f-47ce-a364-b6415e72b9d4] Running
	I1206 10:27:37.294720  365843 system_pods.go:89] "kube-controller-manager-addons-545880" [2b34ecb8-3436-46c0-b985-8fb08a699157] Running
	I1206 10:27:37.294725  365843 system_pods.go:89] "kube-ingress-dns-minikube" [5b393164-cddd-4102-8b6e-d024ce6bcb4c] Pending
	I1206 10:27:37.294747  365843 system_pods.go:89] "kube-proxy-9k5w7" [7b4d65bc-d662-4175-864c-c3d1e6e69e31] Running
	I1206 10:27:37.294759  365843 system_pods.go:89] "kube-scheduler-addons-545880" [9a2c1865-c2cd-4843-a2fb-029ad83e8827] Running
	I1206 10:27:37.294763  365843 system_pods.go:89] "metrics-server-85b7d694d7-6j9l7" [c7953423-aab8-4805-bc6f-57aac150e43a] Pending
	I1206 10:27:37.294767  365843 system_pods.go:89] "nvidia-device-plugin-daemonset-6sbmv" [255b0abe-864c-4ff8-9125-e4ffb052005c] Pending
	I1206 10:27:37.294783  365843 system_pods.go:89] "registry-6b586f9694-lrjzv" [ccdcbcbc-0689-4862-8dd1-415689504519] Pending
	I1206 10:27:37.294797  365843 system_pods.go:89] "registry-creds-764b6fb674-nrw5g" [db673d4c-19b0-4f94-bf04-a6cbd90211bf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 10:27:37.294803  365843 system_pods.go:89] "registry-proxy-j4zp9" [7cd8d746-23f2-448e-a9a6-8281f58e0d70] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 10:27:37.294833  365843 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4trlb" [316c179e-ac90-449c-9f3e-ffcb1016f2ff] Pending
	I1206 10:27:37.294846  365843 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ncslf" [bb9699cb-e616-4ee5-a31e-886ed8794ce4] Pending
	I1206 10:27:37.294851  365843 system_pods.go:89] "storage-provisioner" [5f75f24f-7f45-4a01-a603-961b4f05ca09] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 10:27:37.294885  365843 retry.go:31] will retry after 262.575521ms: missing components: kube-dns
	I1206 10:27:37.446980  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:37.452621  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:37.496589  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:37.564517  365843 system_pods.go:86] 19 kube-system pods found
	I1206 10:27:37.564553  365843 system_pods.go:89] "coredns-66bc5c9577-mf79v" [9fb9121c-9008-465a-bb08-cf1bfb566a86] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:27:37.564570  365843 system_pods.go:89] "csi-hostpath-attacher-0" [c0a59cbc-3f64-4410-8abf-b27d4ffe5eca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 10:27:37.564595  365843 system_pods.go:89] "csi-hostpath-resizer-0" [0c654e88-775b-4aa4-a773-413240f38d73] Pending
	I1206 10:27:37.564609  365843 system_pods.go:89] "csi-hostpathplugin-t892t" [e990c5a0-fb3d-4475-b96a-847521762ef2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1206 10:27:37.564613  365843 system_pods.go:89] "etcd-addons-545880" [54accd8f-75ed-493f-adf4-e8b3bdfc9ee7] Running
	I1206 10:27:37.564619  365843 system_pods.go:89] "kindnet-fmxlt" [1cdbf4b9-d4ed-42da-be4b-c7eb54a81d3a] Running
	I1206 10:27:37.564629  365843 system_pods.go:89] "kube-apiserver-addons-545880" [a9526e71-012f-47ce-a364-b6415e72b9d4] Running
	I1206 10:27:37.564647  365843 system_pods.go:89] "kube-controller-manager-addons-545880" [2b34ecb8-3436-46c0-b985-8fb08a699157] Running
	I1206 10:27:37.564663  365843 system_pods.go:89] "kube-ingress-dns-minikube" [5b393164-cddd-4102-8b6e-d024ce6bcb4c] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 10:27:37.564667  365843 system_pods.go:89] "kube-proxy-9k5w7" [7b4d65bc-d662-4175-864c-c3d1e6e69e31] Running
	I1206 10:27:37.564685  365843 system_pods.go:89] "kube-scheduler-addons-545880" [9a2c1865-c2cd-4843-a2fb-029ad83e8827] Running
	I1206 10:27:37.564699  365843 system_pods.go:89] "metrics-server-85b7d694d7-6j9l7" [c7953423-aab8-4805-bc6f-57aac150e43a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 10:27:37.564707  365843 system_pods.go:89] "nvidia-device-plugin-daemonset-6sbmv" [255b0abe-864c-4ff8-9125-e4ffb052005c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1206 10:27:37.564732  365843 system_pods.go:89] "registry-6b586f9694-lrjzv" [ccdcbcbc-0689-4862-8dd1-415689504519] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 10:27:37.564744  365843 system_pods.go:89] "registry-creds-764b6fb674-nrw5g" [db673d4c-19b0-4f94-bf04-a6cbd90211bf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 10:27:37.564752  365843 system_pods.go:89] "registry-proxy-j4zp9" [7cd8d746-23f2-448e-a9a6-8281f58e0d70] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 10:27:37.564762  365843 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4trlb" [316c179e-ac90-449c-9f3e-ffcb1016f2ff] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 10:27:37.564769  365843 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ncslf" [bb9699cb-e616-4ee5-a31e-886ed8794ce4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 10:27:37.564782  365843 system_pods.go:89] "storage-provisioner" [5f75f24f-7f45-4a01-a603-961b4f05ca09] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 10:27:37.564812  365843 retry.go:31] will retry after 428.098269ms: missing components: kube-dns
	I1206 10:27:37.759619  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:37.939010  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:37.939156  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:38.040572  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:38.041546  365843 system_pods.go:86] 19 kube-system pods found
	I1206 10:27:38.041580  365843 system_pods.go:89] "coredns-66bc5c9577-mf79v" [9fb9121c-9008-465a-bb08-cf1bfb566a86] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:27:38.041590  365843 system_pods.go:89] "csi-hostpath-attacher-0" [c0a59cbc-3f64-4410-8abf-b27d4ffe5eca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 10:27:38.041636  365843 system_pods.go:89] "csi-hostpath-resizer-0" [0c654e88-775b-4aa4-a773-413240f38d73] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1206 10:27:38.041645  365843 system_pods.go:89] "csi-hostpathplugin-t892t" [e990c5a0-fb3d-4475-b96a-847521762ef2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1206 10:27:38.041657  365843 system_pods.go:89] "etcd-addons-545880" [54accd8f-75ed-493f-adf4-e8b3bdfc9ee7] Running
	I1206 10:27:38.041689  365843 system_pods.go:89] "kindnet-fmxlt" [1cdbf4b9-d4ed-42da-be4b-c7eb54a81d3a] Running
	I1206 10:27:38.041701  365843 system_pods.go:89] "kube-apiserver-addons-545880" [a9526e71-012f-47ce-a364-b6415e72b9d4] Running
	I1206 10:27:38.041707  365843 system_pods.go:89] "kube-controller-manager-addons-545880" [2b34ecb8-3436-46c0-b985-8fb08a699157] Running
	I1206 10:27:38.041720  365843 system_pods.go:89] "kube-ingress-dns-minikube" [5b393164-cddd-4102-8b6e-d024ce6bcb4c] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 10:27:38.041731  365843 system_pods.go:89] "kube-proxy-9k5w7" [7b4d65bc-d662-4175-864c-c3d1e6e69e31] Running
	I1206 10:27:38.041738  365843 system_pods.go:89] "kube-scheduler-addons-545880" [9a2c1865-c2cd-4843-a2fb-029ad83e8827] Running
	I1206 10:27:38.041766  365843 system_pods.go:89] "metrics-server-85b7d694d7-6j9l7" [c7953423-aab8-4805-bc6f-57aac150e43a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 10:27:38.041782  365843 system_pods.go:89] "nvidia-device-plugin-daemonset-6sbmv" [255b0abe-864c-4ff8-9125-e4ffb052005c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1206 10:27:38.041794  365843 system_pods.go:89] "registry-6b586f9694-lrjzv" [ccdcbcbc-0689-4862-8dd1-415689504519] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 10:27:38.041807  365843 system_pods.go:89] "registry-creds-764b6fb674-nrw5g" [db673d4c-19b0-4f94-bf04-a6cbd90211bf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 10:27:38.041818  365843 system_pods.go:89] "registry-proxy-j4zp9" [7cd8d746-23f2-448e-a9a6-8281f58e0d70] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 10:27:38.041825  365843 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4trlb" [316c179e-ac90-449c-9f3e-ffcb1016f2ff] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 10:27:38.041865  365843 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ncslf" [bb9699cb-e616-4ee5-a31e-886ed8794ce4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 10:27:38.041883  365843 system_pods.go:89] "storage-provisioner" [5f75f24f-7f45-4a01-a603-961b4f05ca09] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 10:27:38.041908  365843 retry.go:31] will retry after 474.893963ms: missing components: kube-dns
	I1206 10:27:38.248288  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:38.435171  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:38.435305  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:38.484039  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:38.520834  365843 system_pods.go:86] 19 kube-system pods found
	I1206 10:27:38.520877  365843 system_pods.go:89] "coredns-66bc5c9577-mf79v" [9fb9121c-9008-465a-bb08-cf1bfb566a86] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 10:27:38.520886  365843 system_pods.go:89] "csi-hostpath-attacher-0" [c0a59cbc-3f64-4410-8abf-b27d4ffe5eca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 10:27:38.520894  365843 system_pods.go:89] "csi-hostpath-resizer-0" [0c654e88-775b-4aa4-a773-413240f38d73] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1206 10:27:38.520941  365843 system_pods.go:89] "csi-hostpathplugin-t892t" [e990c5a0-fb3d-4475-b96a-847521762ef2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1206 10:27:38.520956  365843 system_pods.go:89] "etcd-addons-545880" [54accd8f-75ed-493f-adf4-e8b3bdfc9ee7] Running
	I1206 10:27:38.520963  365843 system_pods.go:89] "kindnet-fmxlt" [1cdbf4b9-d4ed-42da-be4b-c7eb54a81d3a] Running
	I1206 10:27:38.520968  365843 system_pods.go:89] "kube-apiserver-addons-545880" [a9526e71-012f-47ce-a364-b6415e72b9d4] Running
	I1206 10:27:38.520976  365843 system_pods.go:89] "kube-controller-manager-addons-545880" [2b34ecb8-3436-46c0-b985-8fb08a699157] Running
	I1206 10:27:38.520983  365843 system_pods.go:89] "kube-ingress-dns-minikube" [5b393164-cddd-4102-8b6e-d024ce6bcb4c] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 10:27:38.520990  365843 system_pods.go:89] "kube-proxy-9k5w7" [7b4d65bc-d662-4175-864c-c3d1e6e69e31] Running
	I1206 10:27:38.521016  365843 system_pods.go:89] "kube-scheduler-addons-545880" [9a2c1865-c2cd-4843-a2fb-029ad83e8827] Running
	I1206 10:27:38.521022  365843 system_pods.go:89] "metrics-server-85b7d694d7-6j9l7" [c7953423-aab8-4805-bc6f-57aac150e43a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 10:27:38.521035  365843 system_pods.go:89] "nvidia-device-plugin-daemonset-6sbmv" [255b0abe-864c-4ff8-9125-e4ffb052005c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1206 10:27:38.521042  365843 system_pods.go:89] "registry-6b586f9694-lrjzv" [ccdcbcbc-0689-4862-8dd1-415689504519] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 10:27:38.521051  365843 system_pods.go:89] "registry-creds-764b6fb674-nrw5g" [db673d4c-19b0-4f94-bf04-a6cbd90211bf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 10:27:38.521062  365843 system_pods.go:89] "registry-proxy-j4zp9" [7cd8d746-23f2-448e-a9a6-8281f58e0d70] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 10:27:38.521084  365843 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4trlb" [316c179e-ac90-449c-9f3e-ffcb1016f2ff] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 10:27:38.521101  365843 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ncslf" [bb9699cb-e616-4ee5-a31e-886ed8794ce4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 10:27:38.521109  365843 system_pods.go:89] "storage-provisioner" [5f75f24f-7f45-4a01-a603-961b4f05ca09] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1206 10:27:38.521129  365843 retry.go:31] will retry after 540.417916ms: missing components: kube-dns
	I1206 10:27:38.748507  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:27:38.935615  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:27:38.936021  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1206 10:27:38.984334  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:27:39.066417  365843 system_pods.go:86] 19 kube-system pods found
	I1206 10:27:39.066453  365843 system_pods.go:89] "coredns-66bc5c9577-mf79v" [9fb9121c-9008-465a-bb08-cf1bfb566a86] Running
	I1206 10:27:39.066464  365843 system_pods.go:89] "csi-hostpath-attacher-0" [c0a59cbc-3f64-4410-8abf-b27d4ffe5eca] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1206 10:27:39.066471  365843 system_pods.go:89] "csi-hostpath-resizer-0" [0c654e88-775b-4aa4-a773-413240f38d73] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1206 10:27:39.066527  365843 system_pods.go:89] "csi-hostpathplugin-t892t" [e990c5a0-fb3d-4475-b96a-847521762ef2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1206 10:27:39.066533  365843 system_pods.go:89] "etcd-addons-545880" [54accd8f-75ed-493f-adf4-e8b3bdfc9ee7] Running
	I1206 10:27:39.066538  365843 system_pods.go:89] "kindnet-fmxlt" [1cdbf4b9-d4ed-42da-be4b-c7eb54a81d3a] Running
	I1206 10:27:39.066548  365843 system_pods.go:89] "kube-apiserver-addons-545880" [a9526e71-012f-47ce-a364-b6415e72b9d4] Running
	I1206 10:27:39.066552  365843 system_pods.go:89] "kube-controller-manager-addons-545880" [2b34ecb8-3436-46c0-b985-8fb08a699157] Running
	I1206 10:27:39.066567  365843 system_pods.go:89] "kube-ingress-dns-minikube" [5b393164-cddd-4102-8b6e-d024ce6bcb4c] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1206 10:27:39.066573  365843 system_pods.go:89] "kube-proxy-9k5w7" [7b4d65bc-d662-4175-864c-c3d1e6e69e31] Running
	I1206 10:27:39.066582  365843 system_pods.go:89] "kube-scheduler-addons-545880" [9a2c1865-c2cd-4843-a2fb-029ad83e8827] Running
	I1206 10:27:39.066588  365843 system_pods.go:89] "metrics-server-85b7d694d7-6j9l7" [c7953423-aab8-4805-bc6f-57aac150e43a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1206 10:27:39.066595  365843 system_pods.go:89] "nvidia-device-plugin-daemonset-6sbmv" [255b0abe-864c-4ff8-9125-e4ffb052005c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1206 10:27:39.066607  365843 system_pods.go:89] "registry-6b586f9694-lrjzv" [ccdcbcbc-0689-4862-8dd1-415689504519] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1206 10:27:39.066613  365843 system_pods.go:89] "registry-creds-764b6fb674-nrw5g" [db673d4c-19b0-4f94-bf04-a6cbd90211bf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1206 10:27:39.066623  365843 system_pods.go:89] "registry-proxy-j4zp9" [7cd8d746-23f2-448e-a9a6-8281f58e0d70] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1206 10:27:39.066630  365843 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4trlb" [316c179e-ac90-449c-9f3e-ffcb1016f2ff] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 10:27:39.066640  365843 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ncslf" [bb9699cb-e616-4ee5-a31e-886ed8794ce4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1206 10:27:39.066644  365843 system_pods.go:89] "storage-provisioner" [5f75f24f-7f45-4a01-a603-961b4f05ca09] Running
	I1206 10:27:39.066656  365843 system_pods.go:126] duration metric: took 2.082268071s to wait for k8s-apps to be running ...
	I1206 10:27:39.066669  365843 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 10:27:39.066729  365843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:27:39.081542  365843 system_svc.go:56] duration metric: took 14.863596ms WaitForService to wait for kubelet
	I1206 10:27:39.081575  365843 kubeadm.go:587] duration metric: took 43.685951854s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 10:27:39.081594  365843 node_conditions.go:102] verifying NodePressure condition ...
	I1206 10:27:39.084602  365843 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1206 10:27:39.084634  365843 node_conditions.go:123] node cpu capacity is 2
	I1206 10:27:39.084664  365843 node_conditions.go:105] duration metric: took 3.063624ms to run NodePressure ...
	I1206 10:27:39.084678  365843 start.go:242] waiting for startup goroutines ...
	[repetitive output elided: kapi.go:96 "waiting for pod" checks for kubernetes.io/minikube-addons=csi-hostpath-driver, app.kubernetes.io/name=ingress-nginx, kubernetes.io/minikube-addons=registry, and kubernetes.io/minikube-addons=gcp-auth repeat on a ~500ms cycle from I1206 10:27:39.249455 through I1206 10:28:18.749280, every check still reporting "current state: Pending: [<nil>]"]
	I1206 10:28:18.936548  365843 kapi.go:107] duration metric: took 1m17.005741109s to wait for kubernetes.io/minikube-addons=registry ...
	I1206 10:28:18.936840  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:18.984173  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[repetitive output elided: the same kapi.go:96 "waiting for pod" checks continue for the remaining selectors (csi-hostpath-driver, ingress-nginx, gcp-auth), all still "Pending: [<nil>]", from I1206 10:28:19.249056 through I1206 10:28:24.250121]
	I1206 10:28:24.434137  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:24.484121  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:24.750123  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:24.956832  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:24.983926  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:25.254072  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:25.435977  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:25.484323  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:25.749268  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:25.937797  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:25.984839  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:26.249078  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:26.434559  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:26.484689  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:26.749353  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:26.933867  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:26.984173  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:27.248957  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:27.434298  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:27.487409  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:27.749096  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:27.934098  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:27.984138  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:28.248838  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:28.434045  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:28.483794  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:28.748521  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:28.934229  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:28.984415  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:29.249259  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:29.437327  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:29.485317  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:29.754914  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:29.944286  365843 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1206 10:28:30.056981  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:30.257761  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:30.434615  365843 kapi.go:107] duration metric: took 1m28.504005454s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1206 10:28:30.485776  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:30.749169  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:30.984428  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:31.249492  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:31.484850  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:31.748787  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:31.984406  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:32.249681  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:32.485003  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1206 10:28:32.749270  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:32.984408  365843 kapi.go:107] duration metric: took 1m27.503812031s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1206 10:28:32.987655  365843 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-545880 cluster.
	I1206 10:28:32.990933  365843 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1206 10:28:32.993987  365843 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1206 10:28:33.249021  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:33.749164  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:34.248668  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:34.749644  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:35.249655  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:35.748374  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:36.250277  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:36.749375  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:37.249829  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:37.748586  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:38.249402  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:38.749686  365843 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1206 10:28:39.248556  365843 kapi.go:107] duration metric: took 1m37.003490424s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1206 10:28:39.251914  365843 out.go:179] * Enabled addons: nvidia-device-plugin, storage-provisioner, amd-gpu-device-plugin, storage-provisioner-rancher, inspektor-gadget, registry-creds, cloud-spanner, ingress-dns, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1206 10:28:39.254774  365843 addons.go:530] duration metric: took 1m43.858897793s for enable addons: enabled=[nvidia-device-plugin storage-provisioner amd-gpu-device-plugin storage-provisioner-rancher inspektor-gadget registry-creds cloud-spanner ingress-dns metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1206 10:28:39.254842  365843 start.go:247] waiting for cluster config update ...
	I1206 10:28:39.254867  365843 start.go:256] writing updated cluster config ...
	I1206 10:28:39.255177  365843 ssh_runner.go:195] Run: rm -f paused
	I1206 10:28:39.259881  365843 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 10:28:39.263446  365843 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mf79v" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:28:39.269116  365843 pod_ready.go:94] pod "coredns-66bc5c9577-mf79v" is "Ready"
	I1206 10:28:39.269148  365843 pod_ready.go:86] duration metric: took 5.674593ms for pod "coredns-66bc5c9577-mf79v" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:28:39.271536  365843 pod_ready.go:83] waiting for pod "etcd-addons-545880" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:28:39.276140  365843 pod_ready.go:94] pod "etcd-addons-545880" is "Ready"
	I1206 10:28:39.276165  365843 pod_ready.go:86] duration metric: took 4.594698ms for pod "etcd-addons-545880" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:28:39.278745  365843 pod_ready.go:83] waiting for pod "kube-apiserver-addons-545880" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:28:39.283788  365843 pod_ready.go:94] pod "kube-apiserver-addons-545880" is "Ready"
	I1206 10:28:39.283817  365843 pod_ready.go:86] duration metric: took 5.048471ms for pod "kube-apiserver-addons-545880" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:28:39.286371  365843 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-545880" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:28:39.664488  365843 pod_ready.go:94] pod "kube-controller-manager-addons-545880" is "Ready"
	I1206 10:28:39.664517  365843 pod_ready.go:86] duration metric: took 378.12241ms for pod "kube-controller-manager-addons-545880" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:28:39.864365  365843 pod_ready.go:83] waiting for pod "kube-proxy-9k5w7" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:28:40.263696  365843 pod_ready.go:94] pod "kube-proxy-9k5w7" is "Ready"
	I1206 10:28:40.263808  365843 pod_ready.go:86] duration metric: took 399.402768ms for pod "kube-proxy-9k5w7" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:28:40.465379  365843 pod_ready.go:83] waiting for pod "kube-scheduler-addons-545880" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:28:40.863559  365843 pod_ready.go:94] pod "kube-scheduler-addons-545880" is "Ready"
	I1206 10:28:40.863588  365843 pod_ready.go:86] duration metric: took 397.930534ms for pod "kube-scheduler-addons-545880" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 10:28:40.863602  365843 pod_ready.go:40] duration metric: took 1.603688491s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 10:28:40.927696  365843 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1206 10:28:40.931346  365843 out.go:179] * Done! kubectl is now configured to use "addons-545880" cluster and "default" namespace by default
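
The repeated `kapi.go:96` lines above are minikube's addon poller re-listing pods by label selector until they leave Pending, with `kapi.go:107` recording the total wait per selector; the `pod_ready.go` lines at the end apply the same pattern to the core kube-system pods. A minimal sketch of that wait pattern using client-go (function and variable names here are illustrative, not minikube's actual code):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitForPodsRunning polls until every pod matching selector in ns is
    // Running, mirroring the "waiting for pod ... current state: Pending"
    // loop in the log above.
    func waitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
    	start := time.Now()
    	err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    			if err != nil || len(pods.Items) == 0 {
    				return false, nil // treat list errors and empty lists as transient; keep polling
    			}
    			for _, p := range pods.Items {
    				if p.Status.Phase != corev1.PodRunning {
    					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
    					return false, nil
    				}
    			}
    			return true, nil
    		})
    	if err == nil {
    		fmt.Printf("duration metric: took %s to wait for %s\n", time.Since(start), selector)
    	}
    	return err
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	if err := waitForPodsRunning(context.Background(), cs, "kube-system",
    		"kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute); err != nil {
    		panic(err)
    	}
    }

Polling on a short interval and treating list errors as transient, as above, is what produces one log line per tick until the selector's pods are Running.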
	
	
	==> CRI-O <==
	Dec 06 10:28:42 addons-545880 crio[828]: time="2025-12-06T10:28:42.052759685Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 06 10:28:44 addons-545880 crio[828]: time="2025-12-06T10:28:44.131685582Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=f1fbe04c-4beb-411d-95b6-5bdb74a29961 name=/runtime.v1.ImageService/PullImage
	Dec 06 10:28:44 addons-545880 crio[828]: time="2025-12-06T10:28:44.132456739Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=de28e4b6-e106-465c-9f78-f665ffd312a6 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:28:44 addons-545880 crio[828]: time="2025-12-06T10:28:44.134735389Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f8a45349-0ce2-4265-8c9d-03f515c112fa name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:28:44 addons-545880 crio[828]: time="2025-12-06T10:28:44.142890348Z" level=info msg="Creating container: default/busybox/busybox" id=175b77ad-aa6b-4c64-bd8e-28fcf3b5c72a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 10:28:44 addons-545880 crio[828]: time="2025-12-06T10:28:44.143022961Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 10:28:44 addons-545880 crio[828]: time="2025-12-06T10:28:44.149630821Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 10:28:44 addons-545880 crio[828]: time="2025-12-06T10:28:44.150133652Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 06 10:28:44 addons-545880 crio[828]: time="2025-12-06T10:28:44.16691985Z" level=info msg="Created container f3ca8ee2f4c771d4573eea195ca9d3cda6a4a0ad8220a98316edddebb94c2908: default/busybox/busybox" id=175b77ad-aa6b-4c64-bd8e-28fcf3b5c72a name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 10:28:44 addons-545880 crio[828]: time="2025-12-06T10:28:44.167631848Z" level=info msg="Starting container: f3ca8ee2f4c771d4573eea195ca9d3cda6a4a0ad8220a98316edddebb94c2908" id=9ed4b399-f9bb-4bcf-a77f-c3a3810b762c name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 10:28:44 addons-545880 crio[828]: time="2025-12-06T10:28:44.169377873Z" level=info msg="Started container" PID=4953 containerID=f3ca8ee2f4c771d4573eea195ca9d3cda6a4a0ad8220a98316edddebb94c2908 description=default/busybox/busybox id=9ed4b399-f9bb-4bcf-a77f-c3a3810b762c name=/runtime.v1.RuntimeService/StartContainer sandboxID=517a7fbc44d334610835ee1c7aef90fdb11e20c957b474705e84e042bae92768
	Dec 06 10:28:50 addons-545880 crio[828]: time="2025-12-06T10:28:50.159632437Z" level=info msg="Removing container: 537c152703939e4711c1491bf2aaa0cb69724df7e6a490b5368d5ad051e25e04" id=69acd7b2-718c-4132-9724-e2dbbd9e9662 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 10:28:50 addons-545880 crio[828]: time="2025-12-06T10:28:50.162179831Z" level=info msg="Error loading conmon cgroup of container 537c152703939e4711c1491bf2aaa0cb69724df7e6a490b5368d5ad051e25e04: cgroup deleted" id=69acd7b2-718c-4132-9724-e2dbbd9e9662 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 10:28:50 addons-545880 crio[828]: time="2025-12-06T10:28:50.171695344Z" level=info msg="Removed container 537c152703939e4711c1491bf2aaa0cb69724df7e6a490b5368d5ad051e25e04: gcp-auth/gcp-auth-certs-patch-cdbph/patch" id=69acd7b2-718c-4132-9724-e2dbbd9e9662 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 10:28:50 addons-545880 crio[828]: time="2025-12-06T10:28:50.173440368Z" level=info msg="Removing container: 4dc13e039aee05e85afbd96596bedb2f86c846916f17537c8814bafca53562d1" id=87620cbc-6de4-4101-b439-d88864dd38ca name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 10:28:50 addons-545880 crio[828]: time="2025-12-06T10:28:50.175991029Z" level=info msg="Error loading conmon cgroup of container 4dc13e039aee05e85afbd96596bedb2f86c846916f17537c8814bafca53562d1: cgroup deleted" id=87620cbc-6de4-4101-b439-d88864dd38ca name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 10:28:50 addons-545880 crio[828]: time="2025-12-06T10:28:50.185063821Z" level=info msg="Removed container 4dc13e039aee05e85afbd96596bedb2f86c846916f17537c8814bafca53562d1: gcp-auth/gcp-auth-certs-create-5nx6x/create" id=87620cbc-6de4-4101-b439-d88864dd38ca name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 06 10:28:50 addons-545880 crio[828]: time="2025-12-06T10:28:50.188688017Z" level=info msg="Stopping pod sandbox: 6829d88dab5997161d4ef8765d2a77d53cb6c66ac20b31ae7fa5d2fab09363f7" id=e022f585-254a-4ecd-9b18-a32bd1294552 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 06 10:28:50 addons-545880 crio[828]: time="2025-12-06T10:28:50.188749712Z" level=info msg="Stopped pod sandbox (already stopped): 6829d88dab5997161d4ef8765d2a77d53cb6c66ac20b31ae7fa5d2fab09363f7" id=e022f585-254a-4ecd-9b18-a32bd1294552 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 06 10:28:50 addons-545880 crio[828]: time="2025-12-06T10:28:50.189269257Z" level=info msg="Removing pod sandbox: 6829d88dab5997161d4ef8765d2a77d53cb6c66ac20b31ae7fa5d2fab09363f7" id=b196cbd1-e288-4403-85e6-735aba2f1523 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 06 10:28:50 addons-545880 crio[828]: time="2025-12-06T10:28:50.196406156Z" level=info msg="Removed pod sandbox: 6829d88dab5997161d4ef8765d2a77d53cb6c66ac20b31ae7fa5d2fab09363f7" id=b196cbd1-e288-4403-85e6-735aba2f1523 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 06 10:28:50 addons-545880 crio[828]: time="2025-12-06T10:28:50.197020151Z" level=info msg="Stopping pod sandbox: bd43a2f1363466643085b0d55407d7bec0a5af5336da81ef0faedcfe18c9c3d3" id=2dd1f4bb-bae2-43f7-8221-57c45bbff503 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 06 10:28:50 addons-545880 crio[828]: time="2025-12-06T10:28:50.197067142Z" level=info msg="Stopped pod sandbox (already stopped): bd43a2f1363466643085b0d55407d7bec0a5af5336da81ef0faedcfe18c9c3d3" id=2dd1f4bb-bae2-43f7-8221-57c45bbff503 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 06 10:28:50 addons-545880 crio[828]: time="2025-12-06T10:28:50.19740701Z" level=info msg="Removing pod sandbox: bd43a2f1363466643085b0d55407d7bec0a5af5336da81ef0faedcfe18c9c3d3" id=618b6b5b-060d-44a2-88ef-736fb7d40194 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 06 10:28:50 addons-545880 crio[828]: time="2025-12-06T10:28:50.201913166Z" level=info msg="Removed pod sandbox: bd43a2f1363466643085b0d55407d7bec0a5af5336da81ef0faedcfe18c9c3d3" id=618b6b5b-060d-44a2-88ef-736fb7d40194 name=/runtime.v1.RuntimeService/RemovePodSandbox
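
Each CRI-O entry above is one side of a CRI gRPC call from the kubelet: `ImageStatus` ("Checking image status"), `PullImage` ("Trying to access" / "Pulled image"), then `CreateContainer` and `StartContainer`; the `id=...` fields correlate a request with its response lines. A hedged sketch of issuing the first two RPCs directly against the CRI socket (the socket path is CRI-O's default and an assumption here; requires grpc-go v1.63+ for NewClient):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// CRI-O's default socket; adjust for your host (assumption).
    	conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
    	defer cancel()

    	img := runtimeapi.NewImageServiceClient(conn)
    	spec := &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28.4-glibc"}

    	// "Checking image status" in the log is an ImageStatus call.
    	st, err := img.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{Image: spec})
    	if err != nil {
    		panic(err)
    	}
    	if st.Image == nil {
    		// "Trying to access ..." / "Pulled image: ..." correspond to PullImage.
    		if _, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{Image: spec}); err != nil {
    			panic(err)
    		}
    	}
    	fmt.Println("image present; the kubelet would now call CreateContainer/StartContainer")
    }

This is the same API that crictl speaks when run against the node.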
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	f3ca8ee2f4c77       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          10 seconds ago       Running             busybox                                  0                   517a7fbc44d33       busybox                                    default
	76e109752916e       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          15 seconds ago       Running             csi-snapshotter                          0                   465e98299084f       csi-hostpathplugin-t892t                   kube-system
	c3f6082a7a0c7       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          17 seconds ago       Running             csi-provisioner                          0                   465e98299084f       csi-hostpathplugin-t892t                   kube-system
	d59b55d8c46bd       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            19 seconds ago       Running             liveness-probe                           0                   465e98299084f       csi-hostpathplugin-t892t                   kube-system
	ee27785571f14       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           20 seconds ago       Running             hostpath                                 0                   465e98299084f       csi-hostpathplugin-t892t                   kube-system
	8839ed393577d       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 21 seconds ago       Running             gcp-auth                                 0                   5c27e30d73602       gcp-auth-78565c9fb4-r2csd                  gcp-auth
	fd515714eb80c       registry.k8s.io/ingress-nginx/controller@sha256:655333e68deab34ee3701f400c4d5d9709000cdfdadb802e4bd7500b027e1259                             24 seconds ago       Running             controller                               0                   c2b9aaf3dd1c6       ingress-nginx-controller-6c8bf45fb-g849k   ingress-nginx
	d44b8d7344fa0       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:fadc7bf59b69965b6707edb68022bed4f55a1f99b15f7acd272793e48f171496                            31 seconds ago       Running             gadget                                   0                   92fa074a37e9b       gadget-hv92d                               gadget
	b58456cd2cfa5       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                35 seconds ago       Running             node-driver-registrar                    0                   465e98299084f       csi-hostpathplugin-t892t                   kube-system
	77b77d1ecb28a       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              36 seconds ago       Running             registry-proxy                           0                   d3da23cc7380d       registry-proxy-j4zp9                       kube-system
	0bb771e3965c7       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              39 seconds ago       Running             csi-resizer                              0                   1ef5e905399af       csi-hostpath-resizer-0                     kube-system
	82475061c7165       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        41 seconds ago       Running             metrics-server                           0                   5408982dabead       metrics-server-85b7d694d7-6j9l7            kube-system
	52d954765a231       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           43 seconds ago       Running             registry                                 0                   0d6dab2fc1c77       registry-6b586f9694-lrjzv                  kube-system
	e292596ad2f80       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      45 seconds ago       Running             volume-snapshot-controller               0                   f1846b10c5ad0       snapshot-controller-7d9fbc56b8-4trlb       kube-system
	5c8f0b1a09ff1       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             46 seconds ago       Running             local-path-provisioner                   0                   bdebb1b5ca15f       local-path-provisioner-648f6765c9-gk2vf    local-path-storage
	ab9b79c2c68c1       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               47 seconds ago       Running             minikube-ingress-dns                     0                   3858fd11cabc3       kube-ingress-dns-minikube                  kube-system
	56ab0813eec7a       32daba64b064c571f27dbd4e285969f47f8e5dd6c692279b48622e941b4d137f                                                                             48 seconds ago       Exited              patch                                    2                   b29638b638993       ingress-nginx-admission-patch-pb2fq        ingress-nginx
	85307815e9595       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e733096c3a5b75504c6380083abc960c9627efd23e099df780adfb4eec197583                   57 seconds ago       Exited              create                                   0                   8684d8ea30d3e       ingress-nginx-admission-create-f28bm       ingress-nginx
	eaaabe40faa63       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   57 seconds ago       Running             csi-external-health-monitor-controller   0                   465e98299084f       csi-hostpathplugin-t892t                   kube-system
	f05b7f270b36a       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               59 seconds ago       Running             cloud-spanner-emulator                   0                   6ccd0e009043d       cloud-spanner-emulator-5bdddb765-c9nvk     default
	aea66f3787491       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     About a minute ago   Running             nvidia-device-plugin-ctr                 0                   0bef96d69b1e8       nvidia-device-plugin-daemonset-6sbmv       kube-system
	bbd2d73693ff1       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   085dda5e55ccf       snapshot-controller-7d9fbc56b8-ncslf       kube-system
	e656eea2f31f9       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   e6f8f3215b678       yakd-dashboard-5ff678cb9-gcfw2             yakd-dashboard
	778b08b9b628c       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   ec0f8396a1814       csi-hostpath-attacher-0                    kube-system
	358be0ebbc23e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   fc4d9c73e214b       coredns-66bc5c9577-mf79v                   kube-system
	1c178782b46ae       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   f1add1a4b94ab       storage-provisioner                        kube-system
	4f2a86e87c1bf       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                                                             About a minute ago   Running             kube-proxy                               0                   079d34e17a383       kube-proxy-9k5w7                           kube-system
	410f38934f188       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             About a minute ago   Running             kindnet-cni                              0                   5d97ef012cd86       kindnet-fmxlt                              kube-system
	69ffc3958d44b       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                                                             2 minutes ago        Running             kube-scheduler                           0                   aaa0d430bb667       kube-scheduler-addons-545880               kube-system
	66618904a8d73       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                                                             2 minutes ago        Running             kube-controller-manager                  0                   0469ea34da844       kube-controller-manager-addons-545880      kube-system
	9717574a82552       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                                                             2 minutes ago        Running             etcd                                     0                   f64b27320e32b       etcd-addons-545880                         kube-system
	6bfecd83e062d       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                                                             2 minutes ago        Running             kube-apiserver                           0                   83b51357fa75a       kube-apiserver-addons-545880               kube-system
	
	
	==> coredns [358be0ebbc23e420f6fde28e811fd30f1d4064a0e72dfb910c4e719a8d628d3b] <==
	[INFO] 10.244.0.17:45328 - 59002 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000209159s
	[INFO] 10.244.0.17:45328 - 3072 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002260795s
	[INFO] 10.244.0.17:45328 - 35889 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002223166s
	[INFO] 10.244.0.17:45328 - 49176 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000152255s
	[INFO] 10.244.0.17:45328 - 8067 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000077121s
	[INFO] 10.244.0.17:44653 - 46363 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000228523s
	[INFO] 10.244.0.17:44653 - 46133 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000082676s
	[INFO] 10.244.0.17:38277 - 45845 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00009971s
	[INFO] 10.244.0.17:38277 - 45640 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000234891s
	[INFO] 10.244.0.17:51825 - 7575 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000112395s
	[INFO] 10.244.0.17:51825 - 7395 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000107275s
	[INFO] 10.244.0.17:34245 - 25 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001333591s
	[INFO] 10.244.0.17:34245 - 454 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001441916s
	[INFO] 10.244.0.17:50363 - 8960 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000133154s
	[INFO] 10.244.0.17:50363 - 8821 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000253631s
	[INFO] 10.244.0.21:54344 - 16901 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000157187s
	[INFO] 10.244.0.21:56225 - 26260 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000112378s
	[INFO] 10.244.0.21:40270 - 49635 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000090914s
	[INFO] 10.244.0.21:57368 - 56946 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000095197s
	[INFO] 10.244.0.21:58809 - 54875 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000099316s
	[INFO] 10.244.0.21:51237 - 50010 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000071172s
	[INFO] 10.244.0.21:45504 - 44889 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001893227s
	[INFO] 10.244.0.21:44692 - 29443 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001696007s
	[INFO] 10.244.0.21:42279 - 29185 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000579632s
	[INFO] 10.244.0.21:57705 - 26673 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.001400562s
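
The NXDOMAIN bursts in the coredns log are ordinary `ndots:5` search-path expansion: because `registry.kube-system.svc.cluster.local` contains fewer than five dots, the pod's resolver tries it with every suffix from its resolv.conf search list first (each answered NXDOMAIN) before trying the name as-is (NOERROR). A self-contained sketch of that candidate ordering, using the search list visible in the log:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // expand reproduces the resolver's ordering: if the name has fewer dots
    // than ndots, try each search suffix first, then the bare name.
    func expand(name string, search []string, ndots int) []string {
    	var out []string
    	if strings.Count(name, ".") < ndots {
    		for _, s := range search {
    			out = append(out, name+"."+s)
    		}
    		return append(out, name)
    	}
    	// Enough dots: try the bare name first, then the suffixes.
    	out = append(out, name)
    	for _, s := range search {
    		out = append(out, name+"."+s)
    	}
    	return out
    }

    func main() {
    	search := []string{
    		"kube-system.svc.cluster.local",
    		"svc.cluster.local",
    		"cluster.local",
    		"us-east-2.compute.internal",
    	}
    	for _, q := range expand("registry.kube-system.svc.cluster.local", search, 5) {
    		fmt.Println(q)
    	}
    }

Running it prints the same sequence of suffixed query names that appears above for client 10.244.0.17.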
	
	
	==> describe nodes <==
	Name:               addons-545880
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-545880
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
	                    minikube.k8s.io/name=addons-545880
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T10_26_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-545880
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-545880"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 10:26:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-545880
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 10:28:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 10:28:52 +0000   Sat, 06 Dec 2025 10:26:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 10:28:52 +0000   Sat, 06 Dec 2025 10:26:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 10:28:52 +0000   Sat, 06 Dec 2025 10:26:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 10:28:52 +0000   Sat, 06 Dec 2025 10:27:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-545880
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 276ce0203b90767726fe164c6931608e
	  System UUID:                a99aad51-7303-4ca2-bd24-4fd3bb983487
	  Boot ID:                    b73b980d-8d6b-40e0-82fa-5c1b47c1eef7
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  default                     cloud-spanner-emulator-5bdddb765-c9nvk      0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  gadget                      gadget-hv92d                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  gcp-auth                    gcp-auth-78565c9fb4-r2csd                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-g849k    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         113s
	  kube-system                 coredns-66bc5c9577-mf79v                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     119s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 csi-hostpathplugin-t892t                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 etcd-addons-545880                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m4s
	  kube-system                 kindnet-fmxlt                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m
	  kube-system                 kube-apiserver-addons-545880                250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-controller-manager-addons-545880       200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-9k5w7                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-scheduler-addons-545880                100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 metrics-server-85b7d694d7-6j9l7             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         114s
	  kube-system                 nvidia-device-plugin-daemonset-6sbmv        0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 registry-6b586f9694-lrjzv                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 registry-creds-764b6fb674-nrw5g             0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 registry-proxy-j4zp9                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 snapshot-controller-7d9fbc56b8-4trlb        0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 snapshot-controller-7d9fbc56b8-ncslf        0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  local-path-storage          local-path-provisioner-648f6765c9-gk2vf     0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-gcfw2              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     113s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 117s                   kube-proxy       
	  Normal   NodeHasSufficientMemory  2m11s (x8 over 2m11s)  kubelet          Node addons-545880 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m11s (x8 over 2m11s)  kubelet          Node addons-545880 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m11s (x8 over 2m11s)  kubelet          Node addons-545880 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m4s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m4s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m4s                   kubelet          Node addons-545880 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m4s                   kubelet          Node addons-545880 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m4s                   kubelet          Node addons-545880 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m                     node-controller  Node addons-545880 event: Registered Node addons-545880 in Controller
	  Normal   NodeReady                78s                    kubelet          Node addons-545880 status is now: NodeReady
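
As a consistency check on the Allocated resources table above: the listed CPU requests sum to 100m (coredns) + 100m (ingress-nginx-controller) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) + 100m (metrics-server) = 1050m, which against the node's 2000m of allocatable CPU is 52.5%, reported as 52%. The memory requests likewise sum to 90 + 70 + 100 + 50 + 200 + 128 = 638Mi, about 8% of the 8022300Ki allocatable.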
	
	
	==> dmesg <==
	[Dec 6 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014752] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503231] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.065820] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.901896] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.944603] kauditd_printk_skb: 39 callbacks suppressed
	[Dec 6 09:04] hrtimer: interrupt took 32230230 ns
	[Dec 6 10:25] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 6 10:26] overlayfs: idmapped layers are currently not supported
	[  +0.066821] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [9717574a8255200f8dddcf7a2550e63bdb6b4bb664ec25aeb8635f9277183f01] <==
	{"level":"warn","ts":"2025-12-06T10:26:45.972153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:26:45.986921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:26:46.002120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:26:46.052607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:26:46.073844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:26:46.088418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:26:46.115959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:26:46.131509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:26:46.151590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:26:46.184625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:26:46.215467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:26:46.263516Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:26:46.283702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:26:46.320574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:26:46.331871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:26:46.362691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:26:46.377088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:26:46.393600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:26:46.491195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:27:02.508252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:27:02.522562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:27:24.271499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:27:24.301768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:27:24.345868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T10:27:24.372638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54438","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [8839ed393577da519560ec647e7e40f8a85575cec116939c4863405e2813374a] <==
	2025/12/06 10:28:32 GCP Auth Webhook started!
	2025/12/06 10:28:41 Ready to marshal response ...
	2025/12/06 10:28:41 Ready to write response ...
	2025/12/06 10:28:41 Ready to marshal response ...
	2025/12/06 10:28:41 Ready to write response ...
	2025/12/06 10:28:41 Ready to marshal response ...
	2025/12/06 10:28:41 Ready to write response ...
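
The webhook above is the one announced earlier in the startup log: it mutates every new pod to mount GCP credentials unless the pod carries the `gcp-auth-skip-secret` label. A minimal client-go sketch of such an opted-out pod (pod name and image choice are illustrative):

    package main

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{
    			Name: "no-gcp-creds",
    			// Presence of this label tells the gcp-auth webhook to skip the pod.
    			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
    		},
    		Spec: corev1.PodSpec{
    			Containers: []corev1.Container{{
    				Name:    "app",
    				Image:   "gcr.io/k8s-minikube/busybox:1.28.4-glibc",
    				Command: []string{"sleep", "3600"},
    			}},
    		},
    	}
    	if _, err := cs.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
    		panic(err)
    	}
    }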
	
	
	==> kernel <==
	 10:28:54 up  2:11,  0 user,  load average: 2.09, 2.16, 1.89
	Linux addons-545880 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [410f38934f188529387872c7a0345e42f47f3295f320a1765aa24e1b9a271d4d] <==
	I1206 10:26:56.459530       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1206 10:26:56.460775       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1206 10:27:26.461264       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1206 10:27:26.461276       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1206 10:27:26.461393       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1206 10:27:26.461468       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1206 10:27:27.659927       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1206 10:27:27.659984       1 metrics.go:72] Registering metrics
	I1206 10:27:27.660043       1 controller.go:711] "Syncing nftables rules"
	I1206 10:27:36.467622       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 10:27:36.467680       1 main.go:301] handling current node
	I1206 10:27:46.459485       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 10:27:46.459527       1 main.go:301] handling current node
	I1206 10:27:56.459453       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 10:27:56.459525       1 main.go:301] handling current node
	I1206 10:28:06.465358       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 10:28:06.465404       1 main.go:301] handling current node
	I1206 10:28:16.459475       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 10:28:16.459510       1 main.go:301] handling current node
	I1206 10:28:26.459685       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 10:28:26.459752       1 main.go:301] handling current node
	I1206 10:28:36.459452       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 10:28:36.459500       1 main.go:301] handling current node
	I1206 10:28:46.459601       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1206 10:28:46.459716       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6bfecd83e062db176f5124191f88157b58c2a91ba34d40d6c82c9fbd3c6fee47] <==
	W1206 10:27:36.774721       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.153.165:443: connect: connection refused
	E1206 10:27:36.774777       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.153.165:443: connect: connection refused" logger="UnhandledError"
	W1206 10:27:36.879082       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.153.165:443: connect: connection refused
	E1206 10:27:36.879933       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.153.165:443: connect: connection refused" logger="UnhandledError"
	W1206 10:28:01.610995       1 handler_proxy.go:99] no RequestInfo found in the context
	E1206 10:28:01.611050       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1206 10:28:01.611064       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1206 10:28:01.612204       1 handler_proxy.go:99] no RequestInfo found in the context
	E1206 10:28:01.612281       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1206 10:28:01.612291       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1206 10:28:24.914394       1 handler_proxy.go:99] no RequestInfo found in the context
	E1206 10:28:24.914473       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1206 10:28:24.915106       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.255.148:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.255.148:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.255.148:443: connect: connection refused" logger="UnhandledError"
	E1206 10:28:24.916610       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.255.148:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.255.148:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.255.148:443: connect: connection refused" logger="UnhandledError"
	E1206 10:28:24.924682       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.255.148:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.255.148:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.255.148:443: connect: connection refused" logger="UnhandledError"
	E1206 10:28:24.946563       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.255.148:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.108.255.148:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.108.255.148:443: connect: connection refused" logger="UnhandledError"
	I1206 10:28:25.141745       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1206 10:28:52.012497       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:49636: use of closed network connection
	E1206 10:28:52.384445       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:49684: use of closed network connection
	
	
	==> kube-controller-manager [66618904a8d73226678429bf63c1faac7f76d45b9de953c282d294fedfc2cfb6] <==
	I1206 10:26:54.295088       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1206 10:26:54.295142       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1206 10:26:54.296606       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1206 10:26:54.296886       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1206 10:26:54.300981       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1206 10:26:54.304407       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1206 10:26:54.304916       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1206 10:26:54.307399       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1206 10:26:54.321190       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 10:26:54.321216       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1206 10:26:54.321231       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1206 10:26:54.344104       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	E1206 10:27:00.742925       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1206 10:27:24.259525       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1206 10:27:24.259676       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1206 10:27:24.259739       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1206 10:27:24.308875       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1206 10:27:24.329800       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1206 10:27:24.360388       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 10:27:24.430348       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 10:27:39.249915       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1206 10:27:54.365739       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1206 10:27:54.442301       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1206 10:28:24.370197       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1206 10:28:24.460281       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [4f2a86e87c1bf385e11b164e78ea4f4e9844b0534c9bec2d841dfb406fec8a56] <==
	I1206 10:26:56.376938       1 server_linux.go:53] "Using iptables proxy"
	I1206 10:26:56.514056       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 10:26:56.614713       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 10:26:56.615249       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1206 10:26:56.615371       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 10:26:56.667473       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 10:26:56.667837       1 server_linux.go:132] "Using iptables Proxier"
	I1206 10:26:56.676194       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 10:26:56.676514       1 server.go:527] "Version info" version="v1.34.2"
	I1206 10:26:56.676531       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 10:26:56.677922       1 config.go:200] "Starting service config controller"
	I1206 10:26:56.677932       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 10:26:56.677948       1 config.go:106] "Starting endpoint slice config controller"
	I1206 10:26:56.677952       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 10:26:56.677968       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 10:26:56.677972       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 10:26:56.678578       1 config.go:309] "Starting node config controller"
	I1206 10:26:56.678586       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 10:26:56.678592       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 10:26:56.778927       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 10:26:56.778961       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 10:26:56.779000       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [69ffc3958d44bd262b1360fdb7c52481a97a7e588cd4d05224b3704341139dd0] <==
	E1206 10:26:47.508706       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1206 10:26:47.508852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 10:26:47.508975       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1206 10:26:47.509082       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 10:26:47.509195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 10:26:47.509296       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 10:26:47.509400       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 10:26:47.509499       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 10:26:47.509604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 10:26:47.509726       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 10:26:47.509877       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 10:26:47.510000       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1206 10:26:47.510102       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1206 10:26:47.510281       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1206 10:26:47.510460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 10:26:48.341133       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 10:26:48.397111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1206 10:26:48.450311       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 10:26:48.490818       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 10:26:48.517246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1206 10:26:48.523629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1206 10:26:48.632507       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1206 10:26:48.723483       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 10:26:48.837152       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1206 10:26:51.795314       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 06 10:28:14 addons-545880 kubelet[1285]: I1206 10:28:14.990409    1285 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a303eb07-7b06-447b-807e-485efbf0f2fa-kube-api-access-vj2xc" (OuterVolumeSpecName: "kube-api-access-vj2xc") pod "a303eb07-7b06-447b-807e-485efbf0f2fa" (UID: "a303eb07-7b06-447b-807e-485efbf0f2fa"). InnerVolumeSpecName "kube-api-access-vj2xc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 06 10:28:15 addons-545880 kubelet[1285]: I1206 10:28:15.083111    1285 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vj2xc\" (UniqueName: \"kubernetes.io/projected/a303eb07-7b06-447b-807e-485efbf0f2fa-kube-api-access-vj2xc\") on node \"addons-545880\" DevicePath \"\""
	Dec 06 10:28:15 addons-545880 kubelet[1285]: I1206 10:28:15.851831    1285 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd43a2f1363466643085b0d55407d7bec0a5af5336da81ef0faedcfe18c9c3d3"
	Dec 06 10:28:18 addons-545880 kubelet[1285]: I1206 10:28:18.867800    1285 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-j4zp9" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 10:28:19 addons-545880 kubelet[1285]: I1206 10:28:19.877645    1285 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-j4zp9" secret="" err="secret \"gcp-auth\" not found"
	Dec 06 10:28:22 addons-545880 kubelet[1285]: I1206 10:28:22.913564    1285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-j4zp9" podStartSLOduration=6.814699314 podStartE2EDuration="46.913544776s" podCreationTimestamp="2025-12-06 10:27:36 +0000 UTC" firstStartedPulling="2025-12-06 10:27:37.948198737 +0000 UTC m=+47.958006876" lastFinishedPulling="2025-12-06 10:28:18.047044117 +0000 UTC m=+88.056852338" observedRunningTime="2025-12-06 10:28:18.891808735 +0000 UTC m=+88.901616882" watchObservedRunningTime="2025-12-06 10:28:22.913544776 +0000 UTC m=+92.923352915"
	Dec 06 10:28:24 addons-545880 kubelet[1285]: I1206 10:28:24.883857    1285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-hv92d" podStartSLOduration=68.420975255 podStartE2EDuration="1m24.883837665s" podCreationTimestamp="2025-12-06 10:27:00 +0000 UTC" firstStartedPulling="2025-12-06 10:28:06.198286591 +0000 UTC m=+76.208094729" lastFinishedPulling="2025-12-06 10:28:22.661148959 +0000 UTC m=+92.670957139" observedRunningTime="2025-12-06 10:28:22.914832369 +0000 UTC m=+92.924640516" watchObservedRunningTime="2025-12-06 10:28:24.883837665 +0000 UTC m=+94.893645812"
	Dec 06 10:28:29 addons-545880 kubelet[1285]: I1206 10:28:29.953553    1285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-6c8bf45fb-g849k" podStartSLOduration=68.254443377 podStartE2EDuration="1m28.953525955s" podCreationTimestamp="2025-12-06 10:27:01 +0000 UTC" firstStartedPulling="2025-12-06 10:28:08.998431192 +0000 UTC m=+79.008239331" lastFinishedPulling="2025-12-06 10:28:29.69751377 +0000 UTC m=+99.707321909" observedRunningTime="2025-12-06 10:28:29.952599719 +0000 UTC m=+99.962407874" watchObservedRunningTime="2025-12-06 10:28:29.953525955 +0000 UTC m=+99.963334102"
	Dec 06 10:28:32 addons-545880 kubelet[1285]: I1206 10:28:32.955258    1285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-r2csd" podStartSLOduration=64.24652024 podStartE2EDuration="1m27.955240914s" podCreationTimestamp="2025-12-06 10:27:05 +0000 UTC" firstStartedPulling="2025-12-06 10:28:09.039214202 +0000 UTC m=+79.049022341" lastFinishedPulling="2025-12-06 10:28:32.747934877 +0000 UTC m=+102.757743015" observedRunningTime="2025-12-06 10:28:32.954225488 +0000 UTC m=+102.964033643" watchObservedRunningTime="2025-12-06 10:28:32.955240914 +0000 UTC m=+102.965049053"
	Dec 06 10:28:34 addons-545880 kubelet[1285]: I1206 10:28:34.365720    1285 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Dec 06 10:28:34 addons-545880 kubelet[1285]: I1206 10:28:34.365771    1285 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Dec 06 10:28:39 addons-545880 kubelet[1285]: I1206 10:28:39.014204    1285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-t892t" podStartSLOduration=2.272168032 podStartE2EDuration="1m3.014186266s" podCreationTimestamp="2025-12-06 10:27:36 +0000 UTC" firstStartedPulling="2025-12-06 10:27:37.428786091 +0000 UTC m=+47.438594230" lastFinishedPulling="2025-12-06 10:28:38.170804317 +0000 UTC m=+108.180612464" observedRunningTime="2025-12-06 10:28:39.013877611 +0000 UTC m=+109.023685766" watchObservedRunningTime="2025-12-06 10:28:39.014186266 +0000 UTC m=+109.023994405"
	Dec 06 10:28:40 addons-545880 kubelet[1285]: E1206 10:28:40.828737    1285 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Dec 06 10:28:40 addons-545880 kubelet[1285]: E1206 10:28:40.828871    1285 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db673d4c-19b0-4f94-bf04-a6cbd90211bf-gcr-creds podName:db673d4c-19b0-4f94-bf04-a6cbd90211bf nodeName:}" failed. No retries permitted until 2025-12-06 10:29:44.828846336 +0000 UTC m=+174.838654475 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/db673d4c-19b0-4f94-bf04-a6cbd90211bf-gcr-creds") pod "registry-creds-764b6fb674-nrw5g" (UID: "db673d4c-19b0-4f94-bf04-a6cbd90211bf") : secret "registry-creds-gcr" not found
	Dec 06 10:28:41 addons-545880 kubelet[1285]: I1206 10:28:41.842133    1285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/7eb75450-8c10-4096-88ce-6b4ee4f0598f-gcp-creds\") pod \"busybox\" (UID: \"7eb75450-8c10-4096-88ce-6b4ee4f0598f\") " pod="default/busybox"
	Dec 06 10:28:41 addons-545880 kubelet[1285]: I1206 10:28:41.842230    1285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvp4b\" (UniqueName: \"kubernetes.io/projected/7eb75450-8c10-4096-88ce-6b4ee4f0598f-kube-api-access-nvp4b\") pod \"busybox\" (UID: \"7eb75450-8c10-4096-88ce-6b4ee4f0598f\") " pod="default/busybox"
	Dec 06 10:28:42 addons-545880 kubelet[1285]: I1206 10:28:42.215543    1285 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0bab77b-e3e5-415b-9127-afd99b854e57" path="/var/lib/kubelet/pods/d0bab77b-e3e5-415b-9127-afd99b854e57/volumes"
	Dec 06 10:28:45 addons-545880 kubelet[1285]: I1206 10:28:45.086397    1285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=2.000512931 podStartE2EDuration="4.086373888s" podCreationTimestamp="2025-12-06 10:28:41 +0000 UTC" firstStartedPulling="2025-12-06 10:28:42.047521141 +0000 UTC m=+112.057329279" lastFinishedPulling="2025-12-06 10:28:44.133382089 +0000 UTC m=+114.143190236" observedRunningTime="2025-12-06 10:28:45.085727154 +0000 UTC m=+115.095535695" watchObservedRunningTime="2025-12-06 10:28:45.086373888 +0000 UTC m=+115.096182026"
	Dec 06 10:28:46 addons-545880 kubelet[1285]: I1206 10:28:46.213649    1285 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a303eb07-7b06-447b-807e-485efbf0f2fa" path="/var/lib/kubelet/pods/a303eb07-7b06-447b-807e-485efbf0f2fa/volumes"
	Dec 06 10:28:50 addons-545880 kubelet[1285]: I1206 10:28:50.157796    1285 scope.go:117] "RemoveContainer" containerID="537c152703939e4711c1491bf2aaa0cb69724df7e6a490b5368d5ad051e25e04"
	Dec 06 10:28:50 addons-545880 kubelet[1285]: I1206 10:28:50.171979    1285 scope.go:117] "RemoveContainer" containerID="4dc13e039aee05e85afbd96596bedb2f86c846916f17537c8814bafca53562d1"
	Dec 06 10:28:50 addons-545880 kubelet[1285]: E1206 10:28:50.290607    1285 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/cc6075459e4071c6e94b0e7c20ce57bab80c2d832f722c0fa308768cfaaad5cd/diff" to get inode usage: stat /var/lib/containers/storage/overlay/cc6075459e4071c6e94b0e7c20ce57bab80c2d832f722c0fa308768cfaaad5cd/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/ingress-nginx_ingress-nginx-admission-patch-pb2fq_e0939485-ad5f-4e5b-9c1a-bb3fd946742a/patch/1.log" to get inode usage: stat /var/log/pods/ingress-nginx_ingress-nginx-admission-patch-pb2fq_e0939485-ad5f-4e5b-9c1a-bb3fd946742a/patch/1.log: no such file or directory
	Dec 06 10:28:50 addons-545880 kubelet[1285]: E1206 10:28:50.290941    1285 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/f8e6eafde4b9e62a1d811f58fa1025497c858aaf7d36e202e1a17a9dbc5741cb/diff" to get inode usage: stat /var/lib/containers/storage/overlay/f8e6eafde4b9e62a1d811f58fa1025497c858aaf7d36e202e1a17a9dbc5741cb/diff: no such file or directory, extraDiskErr: <nil>
	Dec 06 10:28:50 addons-545880 kubelet[1285]: E1206 10:28:50.307047    1285 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/81f058ee908498ed11b55ed17cb72906b49b4f7f2fd585160feef84e86d63086/diff" to get inode usage: stat /var/lib/containers/storage/overlay/81f058ee908498ed11b55ed17cb72906b49b4f7f2fd585160feef84e86d63086/diff: no such file or directory, extraDiskErr: <nil>
	Dec 06 10:28:54 addons-545880 kubelet[1285]: I1206 10:28:54.211495    1285 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-6sbmv" secret="" err="secret \"gcp-auth\" not found"
	
	
	==> storage-provisioner [1c178782b46ae3df28453a2dd88fc57e38eb824abae86db11976cc74cf8b87be] <==
	W1206 10:28:30.241964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:28:32.245552       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:28:32.254721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:28:34.258319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:28:34.262981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:28:36.266894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:28:36.277328       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:28:38.280423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:28:38.287810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:28:40.290561       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:28:40.297505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:28:42.300648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:28:42.305774       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:28:44.308757       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:28:44.313305       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:28:46.317379       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:28:46.322213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:28:48.325819       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:28:48.332743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:28:50.336834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:28:50.341975       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:28:52.355358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:28:52.366004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:28:54.369476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1206 10:28:54.380370       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
-- /stdout --
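A note on the storage-provisioner block above: the repeated client-go warning means the provisioner still lists/watches v1 Endpoints, which is deprecated since Kubernetes v1.33 in favor of discovery.k8s.io/v1 EndpointSlice. A minimal sketch of the replacement lookup ("my-svc" is a hypothetical placeholder, not a service from this report; the kubernetes.io/service-name label is the standard Service-to-slice link):

	# Deprecated object the warning refers to:
	kubectl --context addons-545880 -n kube-system get endpoints my-svc
	# EndpointSlice equivalent; slices map back to a Service via a well-known label:
	kubectl --context addons-545880 -n kube-system get endpointslices -l kubernetes.io/service-name=my-svc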
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-545880 -n addons-545880
helpers_test.go:269: (dbg) Run:  kubectl --context addons-545880 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-f28bm ingress-nginx-admission-patch-pb2fq registry-creds-764b6fb674-nrw5g
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-545880 describe pod ingress-nginx-admission-create-f28bm ingress-nginx-admission-patch-pb2fq registry-creds-764b6fb674-nrw5g
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-545880 describe pod ingress-nginx-admission-create-f28bm ingress-nginx-admission-patch-pb2fq registry-creds-764b6fb674-nrw5g: exit status 1 (91.798716ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-f28bm" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-pb2fq" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-nrw5g" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-545880 describe pod ingress-nginx-admission-create-f28bm ingress-nginx-admission-patch-pb2fq registry-creds-764b6fb674-nrw5g: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-545880 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-545880 addons disable headlamp --alsologtostderr -v=1: exit status 11 (283.159055ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1206 10:28:55.745434  372418 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:28:55.746253  372418 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:28:55.746275  372418 out.go:374] Setting ErrFile to fd 2...
	I1206 10:28:55.746281  372418 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:28:55.746559  372418 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 10:28:55.746868  372418 mustload.go:66] Loading cluster: addons-545880
	I1206 10:28:55.747261  372418 config.go:182] Loaded profile config "addons-545880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:28:55.747280  372418 addons.go:622] checking whether the cluster is paused
	I1206 10:28:55.747441  372418 config.go:182] Loaded profile config "addons-545880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:28:55.747459  372418 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:28:55.747990  372418 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:28:55.767623  372418 ssh_runner.go:195] Run: systemctl --version
	I1206 10:28:55.767682  372418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:28:55.787618  372418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:28:55.894514  372418 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:28:55.894643  372418 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:28:55.933416  372418 cri.go:89] found id: "76e109752916eb227cc2778fc40189f2225fe99abbb5caa1dc492604fa63b088"
	I1206 10:28:55.933447  372418 cri.go:89] found id: "c3f6082a7a0c7c8725c19d46cd708aeb5d4126a349db5fe93809b3ef79169052"
	I1206 10:28:55.933453  372418 cri.go:89] found id: "d59b55d8c46bd316322b59f76bbed7bf1ba7ae09f22a8d7446896bb650747b97"
	I1206 10:28:55.933457  372418 cri.go:89] found id: "ee27785571f1406c526ba554d46111adfc871bf6b5094f993b79d922ed4e4e88"
	I1206 10:28:55.933460  372418 cri.go:89] found id: "b58456cd2cfa54ef5616f519f55a6b7b272d08f96ca019bf4d2f47f9dc581de3"
	I1206 10:28:55.933464  372418 cri.go:89] found id: "77b77d1ecb28a6271e776faf9148345a91cf28a8eb40f9adc7343e6d90864f3a"
	I1206 10:28:55.933467  372418 cri.go:89] found id: "0bb771e3965c7313e7a976270ee1cf4f72f901f19cf787e7ef330577f83ca8b0"
	I1206 10:28:55.933470  372418 cri.go:89] found id: "82475061c71650dc2d5ef1c1b6fb59dc1e8d85ff79c3598c514ad231134b1d1a"
	I1206 10:28:55.933473  372418 cri.go:89] found id: "52d954765a231dbdcd394aa043b7231f3b45f20db74ede3718de67caabeea5a3"
	I1206 10:28:55.933480  372418 cri.go:89] found id: "e292596ad2f80045ad3b706145d35d90657c46cc5300b047c28f357a09003684"
	I1206 10:28:55.933485  372418 cri.go:89] found id: "ab9b79c2c68c1be8095a1a81cd7d444d52723042c6629740074d930656007cfd"
	I1206 10:28:55.933488  372418 cri.go:89] found id: "eaaabe40faa63af2c6b5e0ffb01fdbff88ff53227bb4a4b884fca2db86a16b38"
	I1206 10:28:55.933491  372418 cri.go:89] found id: "aea66f37874913be4b5420f3d08acfb0b6388ccfb25c63270ce6741cf675ba44"
	I1206 10:28:55.933494  372418 cri.go:89] found id: "bbd2d73693ff14927141ea51103bb4d99dce673d1531632ca460362ab91bc129"
	I1206 10:28:55.933497  372418 cri.go:89] found id: "778b08b9b628cb82a3c8742868fe4b9a4b0dbad3c250600336afae611d54dcfd"
	I1206 10:28:55.933502  372418 cri.go:89] found id: "358be0ebbc23e420f6fde28e811fd30f1d4064a0e72dfb910c4e719a8d628d3b"
	I1206 10:28:55.933505  372418 cri.go:89] found id: "1c178782b46ae3df28453a2dd88fc57e38eb824abae86db11976cc74cf8b87be"
	I1206 10:28:55.933509  372418 cri.go:89] found id: "4f2a86e87c1bf385e11b164e78ea4f4e9844b0534c9bec2d841dfb406fec8a56"
	I1206 10:28:55.933512  372418 cri.go:89] found id: "410f38934f188529387872c7a0345e42f47f3295f320a1765aa24e1b9a271d4d"
	I1206 10:28:55.933515  372418 cri.go:89] found id: "69ffc3958d44bd262b1360fdb7c52481a97a7e588cd4d05224b3704341139dd0"
	I1206 10:28:55.933519  372418 cri.go:89] found id: "66618904a8d73226678429bf63c1faac7f76d45b9de953c282d294fedfc2cfb6"
	I1206 10:28:55.933522  372418 cri.go:89] found id: "9717574a8255200f8dddcf7a2550e63bdb6b4bb664ec25aeb8635f9277183f01"
	I1206 10:28:55.933525  372418 cri.go:89] found id: "6bfecd83e062db176f5124191f88157b58c2a91ba34d40d6c82c9fbd3c6fee47"
	I1206 10:28:55.933528  372418 cri.go:89] found id: ""
	I1206 10:28:55.933589  372418 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 10:28:55.951697  372418 out.go:203] 
	W1206 10:28:55.954632  372418 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:28:55Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:28:55Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 10:28:55.954663  372418 out.go:285] * 
	* 
	W1206 10:28:55.960050  372418 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:28:55.963240  372418 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-545880 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.30s)
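All of the addon-disable failures in this report share one signature: before disabling an addon, minikube checks whether the cluster is paused, and that check shells out to "sudo runc list -f json", which exits 1 here because /run/runc does not exist on this crio node. The probe can be reproduced by hand; a sketch under the assumption that the profile is still running and reachable over "minikube ssh" (the final crun cross-check is an assumption about the configured OCI runtime, not something the log confirms):

	# The two commands the paused check runs (both appear verbatim in the trace above):
	minikube -p addons-545880 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	minikube -p addons-545880 ssh -- sudo runc list -f json   # fails here: open /run/runc: no such file or directory
	# Hypothetical cross-check against the runtime crio may actually be using:
	minikube -p addons-545880 ssh -- sudo crun list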
TestAddons/parallel/CloudSpanner (5.27s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-c9nvk" [f6a3ce7d-bcb1-4ed2-9745-2167ad011ea9] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003390062s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-545880 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-545880 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (264.486283ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1206 10:30:04.137030  374368 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:30:04.137808  374368 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:30:04.137851  374368 out.go:374] Setting ErrFile to fd 2...
	I1206 10:30:04.137877  374368 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:30:04.138164  374368 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 10:30:04.138521  374368 mustload.go:66] Loading cluster: addons-545880
	I1206 10:30:04.138960  374368 config.go:182] Loaded profile config "addons-545880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:30:04.139008  374368 addons.go:622] checking whether the cluster is paused
	I1206 10:30:04.139148  374368 config.go:182] Loaded profile config "addons-545880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:30:04.139187  374368 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:30:04.139800  374368 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:30:04.158358  374368 ssh_runner.go:195] Run: systemctl --version
	I1206 10:30:04.158412  374368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:30:04.177856  374368 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:30:04.286377  374368 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:30:04.286514  374368 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:30:04.317716  374368 cri.go:89] found id: "073a3a9a15ff65f14a591fc9ca890d7531c5e1ecdae14e7fb2c824a638d738ea"
	I1206 10:30:04.317736  374368 cri.go:89] found id: "76e109752916eb227cc2778fc40189f2225fe99abbb5caa1dc492604fa63b088"
	I1206 10:30:04.317741  374368 cri.go:89] found id: "c3f6082a7a0c7c8725c19d46cd708aeb5d4126a349db5fe93809b3ef79169052"
	I1206 10:30:04.317744  374368 cri.go:89] found id: "d59b55d8c46bd316322b59f76bbed7bf1ba7ae09f22a8d7446896bb650747b97"
	I1206 10:30:04.317748  374368 cri.go:89] found id: "ee27785571f1406c526ba554d46111adfc871bf6b5094f993b79d922ed4e4e88"
	I1206 10:30:04.317751  374368 cri.go:89] found id: "b58456cd2cfa54ef5616f519f55a6b7b272d08f96ca019bf4d2f47f9dc581de3"
	I1206 10:30:04.317754  374368 cri.go:89] found id: "77b77d1ecb28a6271e776faf9148345a91cf28a8eb40f9adc7343e6d90864f3a"
	I1206 10:30:04.317760  374368 cri.go:89] found id: "0bb771e3965c7313e7a976270ee1cf4f72f901f19cf787e7ef330577f83ca8b0"
	I1206 10:30:04.317771  374368 cri.go:89] found id: "82475061c71650dc2d5ef1c1b6fb59dc1e8d85ff79c3598c514ad231134b1d1a"
	I1206 10:30:04.317777  374368 cri.go:89] found id: "52d954765a231dbdcd394aa043b7231f3b45f20db74ede3718de67caabeea5a3"
	I1206 10:30:04.317785  374368 cri.go:89] found id: "e292596ad2f80045ad3b706145d35d90657c46cc5300b047c28f357a09003684"
	I1206 10:30:04.317788  374368 cri.go:89] found id: "ab9b79c2c68c1be8095a1a81cd7d444d52723042c6629740074d930656007cfd"
	I1206 10:30:04.317791  374368 cri.go:89] found id: "eaaabe40faa63af2c6b5e0ffb01fdbff88ff53227bb4a4b884fca2db86a16b38"
	I1206 10:30:04.317795  374368 cri.go:89] found id: "aea66f37874913be4b5420f3d08acfb0b6388ccfb25c63270ce6741cf675ba44"
	I1206 10:30:04.317798  374368 cri.go:89] found id: "bbd2d73693ff14927141ea51103bb4d99dce673d1531632ca460362ab91bc129"
	I1206 10:30:04.317803  374368 cri.go:89] found id: "778b08b9b628cb82a3c8742868fe4b9a4b0dbad3c250600336afae611d54dcfd"
	I1206 10:30:04.317809  374368 cri.go:89] found id: "358be0ebbc23e420f6fde28e811fd30f1d4064a0e72dfb910c4e719a8d628d3b"
	I1206 10:30:04.317813  374368 cri.go:89] found id: "1c178782b46ae3df28453a2dd88fc57e38eb824abae86db11976cc74cf8b87be"
	I1206 10:30:04.317816  374368 cri.go:89] found id: "4f2a86e87c1bf385e11b164e78ea4f4e9844b0534c9bec2d841dfb406fec8a56"
	I1206 10:30:04.317819  374368 cri.go:89] found id: "410f38934f188529387872c7a0345e42f47f3295f320a1765aa24e1b9a271d4d"
	I1206 10:30:04.317823  374368 cri.go:89] found id: "69ffc3958d44bd262b1360fdb7c52481a97a7e588cd4d05224b3704341139dd0"
	I1206 10:30:04.317826  374368 cri.go:89] found id: "66618904a8d73226678429bf63c1faac7f76d45b9de953c282d294fedfc2cfb6"
	I1206 10:30:04.317829  374368 cri.go:89] found id: "9717574a8255200f8dddcf7a2550e63bdb6b4bb664ec25aeb8635f9277183f01"
	I1206 10:30:04.317833  374368 cri.go:89] found id: "6bfecd83e062db176f5124191f88157b58c2a91ba34d40d6c82c9fbd3c6fee47"
	I1206 10:30:04.317836  374368 cri.go:89] found id: ""
	I1206 10:30:04.317898  374368 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 10:30:04.335183  374368 out.go:203] 
	W1206 10:30:04.338204  374368 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:30:04Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:30:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 10:30:04.338237  374368 out.go:285] * 
	* 
	W1206 10:30:04.343418  374368 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:30:04.346518  374368 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-545880 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.27s)
TestAddons/parallel/LocalPath (9.45s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-545880 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-545880 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-545880 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-545880 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-545880 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-545880 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-545880 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-545880 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [1e24cceb-241c-4d14-b08c-e8866b981eeb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [1e24cceb-241c-4d14-b08c-e8866b981eeb] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [1e24cceb-241c-4d14-b08c-e8866b981eeb] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003245768s
addons_test.go:967: (dbg) Run:  kubectl --context addons-545880 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-545880 ssh "cat /opt/local-path-provisioner/pvc-e0007cf2-fb61-4410-a0d1-11cea273d032_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-545880 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-545880 delete pvc test-pvc
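The pvc.yaml and pod.yaml applied at the start of this test are not reproduced in the report; a hypothetical minimal equivalent of the claim, assuming storage-provisioner-rancher installs the usual local-path storage class from rancher's local-path-provisioner, would be:

	# Sketch only; names match the transcript above, but the manifest body is an assumption:
	kubectl --context addons-545880 apply -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: test-pvc
	spec:
	  accessModes: ["ReadWriteOnce"]
	  storageClassName: local-path
	  resources:
	    requests:
	      storage: 64Mi
	EOF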
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-545880 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-545880 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (287.210633ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1206 10:29:58.843296  374261 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:29:58.844203  374261 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:29:58.844242  374261 out.go:374] Setting ErrFile to fd 2...
	I1206 10:29:58.844266  374261 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:29:58.844573  374261 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 10:29:58.844916  374261 mustload.go:66] Loading cluster: addons-545880
	I1206 10:29:58.845415  374261 config.go:182] Loaded profile config "addons-545880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:29:58.845459  374261 addons.go:622] checking whether the cluster is paused
	I1206 10:29:58.845611  374261 config.go:182] Loaded profile config "addons-545880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:29:58.845642  374261 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:29:58.846223  374261 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:29:58.868031  374261 ssh_runner.go:195] Run: systemctl --version
	I1206 10:29:58.868090  374261 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:29:58.886572  374261 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:29:58.998352  374261 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:29:58.998442  374261 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:29:59.035251  374261 cri.go:89] found id: "073a3a9a15ff65f14a591fc9ca890d7531c5e1ecdae14e7fb2c824a638d738ea"
	I1206 10:29:59.035281  374261 cri.go:89] found id: "76e109752916eb227cc2778fc40189f2225fe99abbb5caa1dc492604fa63b088"
	I1206 10:29:59.035291  374261 cri.go:89] found id: "c3f6082a7a0c7c8725c19d46cd708aeb5d4126a349db5fe93809b3ef79169052"
	I1206 10:29:59.035295  374261 cri.go:89] found id: "d59b55d8c46bd316322b59f76bbed7bf1ba7ae09f22a8d7446896bb650747b97"
	I1206 10:29:59.035299  374261 cri.go:89] found id: "ee27785571f1406c526ba554d46111adfc871bf6b5094f993b79d922ed4e4e88"
	I1206 10:29:59.035303  374261 cri.go:89] found id: "b58456cd2cfa54ef5616f519f55a6b7b272d08f96ca019bf4d2f47f9dc581de3"
	I1206 10:29:59.035307  374261 cri.go:89] found id: "77b77d1ecb28a6271e776faf9148345a91cf28a8eb40f9adc7343e6d90864f3a"
	I1206 10:29:59.035310  374261 cri.go:89] found id: "0bb771e3965c7313e7a976270ee1cf4f72f901f19cf787e7ef330577f83ca8b0"
	I1206 10:29:59.035313  374261 cri.go:89] found id: "82475061c71650dc2d5ef1c1b6fb59dc1e8d85ff79c3598c514ad231134b1d1a"
	I1206 10:29:59.035322  374261 cri.go:89] found id: "52d954765a231dbdcd394aa043b7231f3b45f20db74ede3718de67caabeea5a3"
	I1206 10:29:59.035332  374261 cri.go:89] found id: "e292596ad2f80045ad3b706145d35d90657c46cc5300b047c28f357a09003684"
	I1206 10:29:59.035336  374261 cri.go:89] found id: "ab9b79c2c68c1be8095a1a81cd7d444d52723042c6629740074d930656007cfd"
	I1206 10:29:59.035339  374261 cri.go:89] found id: "eaaabe40faa63af2c6b5e0ffb01fdbff88ff53227bb4a4b884fca2db86a16b38"
	I1206 10:29:59.035343  374261 cri.go:89] found id: "aea66f37874913be4b5420f3d08acfb0b6388ccfb25c63270ce6741cf675ba44"
	I1206 10:29:59.035346  374261 cri.go:89] found id: "bbd2d73693ff14927141ea51103bb4d99dce673d1531632ca460362ab91bc129"
	I1206 10:29:59.035352  374261 cri.go:89] found id: "778b08b9b628cb82a3c8742868fe4b9a4b0dbad3c250600336afae611d54dcfd"
	I1206 10:29:59.035436  374261 cri.go:89] found id: "358be0ebbc23e420f6fde28e811fd30f1d4064a0e72dfb910c4e719a8d628d3b"
	I1206 10:29:59.035455  374261 cri.go:89] found id: "1c178782b46ae3df28453a2dd88fc57e38eb824abae86db11976cc74cf8b87be"
	I1206 10:29:59.035472  374261 cri.go:89] found id: "4f2a86e87c1bf385e11b164e78ea4f4e9844b0534c9bec2d841dfb406fec8a56"
	I1206 10:29:59.035476  374261 cri.go:89] found id: "410f38934f188529387872c7a0345e42f47f3295f320a1765aa24e1b9a271d4d"
	I1206 10:29:59.035481  374261 cri.go:89] found id: "69ffc3958d44bd262b1360fdb7c52481a97a7e588cd4d05224b3704341139dd0"
	I1206 10:29:59.035489  374261 cri.go:89] found id: "66618904a8d73226678429bf63c1faac7f76d45b9de953c282d294fedfc2cfb6"
	I1206 10:29:59.035493  374261 cri.go:89] found id: "9717574a8255200f8dddcf7a2550e63bdb6b4bb664ec25aeb8635f9277183f01"
	I1206 10:29:59.035500  374261 cri.go:89] found id: "6bfecd83e062db176f5124191f88157b58c2a91ba34d40d6c82c9fbd3c6fee47"
	I1206 10:29:59.035503  374261 cri.go:89] found id: ""
	I1206 10:29:59.035586  374261 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 10:29:59.061347  374261 out.go:203] 
	W1206 10:29:59.064470  374261 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:29:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:29:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 10:29:59.064553  374261 out.go:285] * 
	* 
	W1206 10:29:59.069655  374261 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:29:59.072959  374261 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-545880 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (9.45s)
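
Every addons-disable failure in this run has the same shape, visible in the trace above: the disable path first checks whether the cluster is paused (addons.go:622), 'crictl ps' enumerates the kube-system containers successfully, and then 'sudo runc list -f json' aborts because runc's state directory /run/runc is missing on this crio node, turning the check into exit status 11 (MK_ADDON_DISABLE_PAUSED). The failing sequence can be replayed by hand; the final 'ls' is an added diagnostic, not part of the test:

    out/minikube-linux-arm64 -p addons-545880 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"   # succeeds, prints container IDs
    out/minikube-linux-arm64 -p addons-545880 ssh "sudo runc list -f json"
    # => level=error msg="open /run/runc: no such file or directory"
    out/minikube-linux-arm64 -p addons-545880 ssh "ls -d /run/runc"   # diagnostic: confirm the state dir is absent

Since crictl shows the containers running, the pause check itself, not any individual addon, is what cascades into the Registry, Ingress, MetricsServer, CSI, and remaining parallel failures listed in the summary.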

TestAddons/parallel/NvidiaDevicePlugin (6.28s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-6sbmv" [255b0abe-864c-4ff8-9125-e4ffb052005c] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003568476s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-545880 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-545880 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (271.52568ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1206 10:29:44.114476  373801 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:29:44.115434  373801 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:29:44.115454  373801 out.go:374] Setting ErrFile to fd 2...
	I1206 10:29:44.115461  373801 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:29:44.115743  373801 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 10:29:44.116047  373801 mustload.go:66] Loading cluster: addons-545880
	I1206 10:29:44.116416  373801 config.go:182] Loaded profile config "addons-545880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:29:44.116434  373801 addons.go:622] checking whether the cluster is paused
	I1206 10:29:44.116546  373801 config.go:182] Loaded profile config "addons-545880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:29:44.116561  373801 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:29:44.117055  373801 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:29:44.138773  373801 ssh_runner.go:195] Run: systemctl --version
	I1206 10:29:44.138835  373801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:29:44.158391  373801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:29:44.265910  373801 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:29:44.266024  373801 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:29:44.301045  373801 cri.go:89] found id: "76e109752916eb227cc2778fc40189f2225fe99abbb5caa1dc492604fa63b088"
	I1206 10:29:44.301065  373801 cri.go:89] found id: "c3f6082a7a0c7c8725c19d46cd708aeb5d4126a349db5fe93809b3ef79169052"
	I1206 10:29:44.301070  373801 cri.go:89] found id: "d59b55d8c46bd316322b59f76bbed7bf1ba7ae09f22a8d7446896bb650747b97"
	I1206 10:29:44.301074  373801 cri.go:89] found id: "ee27785571f1406c526ba554d46111adfc871bf6b5094f993b79d922ed4e4e88"
	I1206 10:29:44.301077  373801 cri.go:89] found id: "b58456cd2cfa54ef5616f519f55a6b7b272d08f96ca019bf4d2f47f9dc581de3"
	I1206 10:29:44.301080  373801 cri.go:89] found id: "77b77d1ecb28a6271e776faf9148345a91cf28a8eb40f9adc7343e6d90864f3a"
	I1206 10:29:44.301083  373801 cri.go:89] found id: "0bb771e3965c7313e7a976270ee1cf4f72f901f19cf787e7ef330577f83ca8b0"
	I1206 10:29:44.301086  373801 cri.go:89] found id: "82475061c71650dc2d5ef1c1b6fb59dc1e8d85ff79c3598c514ad231134b1d1a"
	I1206 10:29:44.301089  373801 cri.go:89] found id: "52d954765a231dbdcd394aa043b7231f3b45f20db74ede3718de67caabeea5a3"
	I1206 10:29:44.301096  373801 cri.go:89] found id: "e292596ad2f80045ad3b706145d35d90657c46cc5300b047c28f357a09003684"
	I1206 10:29:44.301099  373801 cri.go:89] found id: "ab9b79c2c68c1be8095a1a81cd7d444d52723042c6629740074d930656007cfd"
	I1206 10:29:44.301102  373801 cri.go:89] found id: "eaaabe40faa63af2c6b5e0ffb01fdbff88ff53227bb4a4b884fca2db86a16b38"
	I1206 10:29:44.301105  373801 cri.go:89] found id: "aea66f37874913be4b5420f3d08acfb0b6388ccfb25c63270ce6741cf675ba44"
	I1206 10:29:44.301108  373801 cri.go:89] found id: "bbd2d73693ff14927141ea51103bb4d99dce673d1531632ca460362ab91bc129"
	I1206 10:29:44.301112  373801 cri.go:89] found id: "778b08b9b628cb82a3c8742868fe4b9a4b0dbad3c250600336afae611d54dcfd"
	I1206 10:29:44.301117  373801 cri.go:89] found id: "358be0ebbc23e420f6fde28e811fd30f1d4064a0e72dfb910c4e719a8d628d3b"
	I1206 10:29:44.301120  373801 cri.go:89] found id: "1c178782b46ae3df28453a2dd88fc57e38eb824abae86db11976cc74cf8b87be"
	I1206 10:29:44.301124  373801 cri.go:89] found id: "4f2a86e87c1bf385e11b164e78ea4f4e9844b0534c9bec2d841dfb406fec8a56"
	I1206 10:29:44.301127  373801 cri.go:89] found id: "410f38934f188529387872c7a0345e42f47f3295f320a1765aa24e1b9a271d4d"
	I1206 10:29:44.301130  373801 cri.go:89] found id: "69ffc3958d44bd262b1360fdb7c52481a97a7e588cd4d05224b3704341139dd0"
	I1206 10:29:44.301135  373801 cri.go:89] found id: "66618904a8d73226678429bf63c1faac7f76d45b9de953c282d294fedfc2cfb6"
	I1206 10:29:44.301141  373801 cri.go:89] found id: "9717574a8255200f8dddcf7a2550e63bdb6b4bb664ec25aeb8635f9277183f01"
	I1206 10:29:44.301144  373801 cri.go:89] found id: "6bfecd83e062db176f5124191f88157b58c2a91ba34d40d6c82c9fbd3c6fee47"
	I1206 10:29:44.301147  373801 cri.go:89] found id: ""
	I1206 10:29:44.301198  373801 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 10:29:44.316360  373801 out.go:203] 
	W1206 10:29:44.319236  373801 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:29:44Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:29:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 10:29:44.319261  373801 out.go:285] * 
	* 
	W1206 10:29:44.324546  373801 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:29:44.327531  373801 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-545880 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.28s)

TestAddons/parallel/Yakd (5.3s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-gcfw2" [eea2d9df-b68a-44a7-9791-1ddad31022be] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.00343706s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-545880 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-545880 addons disable yakd --alsologtostderr -v=1: exit status 11 (289.212866ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1206 10:29:49.395718  373863 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:29:49.397643  373863 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:29:49.397658  373863 out.go:374] Setting ErrFile to fd 2...
	I1206 10:29:49.397663  373863 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:29:49.397947  373863 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 10:29:49.398292  373863 mustload.go:66] Loading cluster: addons-545880
	I1206 10:29:49.407465  373863 config.go:182] Loaded profile config "addons-545880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:29:49.407502  373863 addons.go:622] checking whether the cluster is paused
	I1206 10:29:49.407651  373863 config.go:182] Loaded profile config "addons-545880": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:29:49.407672  373863 host.go:66] Checking if "addons-545880" exists ...
	I1206 10:29:49.408201  373863 cli_runner.go:164] Run: docker container inspect addons-545880 --format={{.State.Status}}
	I1206 10:29:49.441063  373863 ssh_runner.go:195] Run: systemctl --version
	I1206 10:29:49.441121  373863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-545880
	I1206 10:29:49.459498  373863 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/addons-545880/id_rsa Username:docker}
	I1206 10:29:49.566230  373863 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:29:49.566328  373863 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:29:49.597352  373863 cri.go:89] found id: "76e109752916eb227cc2778fc40189f2225fe99abbb5caa1dc492604fa63b088"
	I1206 10:29:49.597380  373863 cri.go:89] found id: "c3f6082a7a0c7c8725c19d46cd708aeb5d4126a349db5fe93809b3ef79169052"
	I1206 10:29:49.597387  373863 cri.go:89] found id: "d59b55d8c46bd316322b59f76bbed7bf1ba7ae09f22a8d7446896bb650747b97"
	I1206 10:29:49.597394  373863 cri.go:89] found id: "ee27785571f1406c526ba554d46111adfc871bf6b5094f993b79d922ed4e4e88"
	I1206 10:29:49.597397  373863 cri.go:89] found id: "b58456cd2cfa54ef5616f519f55a6b7b272d08f96ca019bf4d2f47f9dc581de3"
	I1206 10:29:49.597401  373863 cri.go:89] found id: "77b77d1ecb28a6271e776faf9148345a91cf28a8eb40f9adc7343e6d90864f3a"
	I1206 10:29:49.597404  373863 cri.go:89] found id: "0bb771e3965c7313e7a976270ee1cf4f72f901f19cf787e7ef330577f83ca8b0"
	I1206 10:29:49.597407  373863 cri.go:89] found id: "82475061c71650dc2d5ef1c1b6fb59dc1e8d85ff79c3598c514ad231134b1d1a"
	I1206 10:29:49.597410  373863 cri.go:89] found id: "52d954765a231dbdcd394aa043b7231f3b45f20db74ede3718de67caabeea5a3"
	I1206 10:29:49.597437  373863 cri.go:89] found id: "e292596ad2f80045ad3b706145d35d90657c46cc5300b047c28f357a09003684"
	I1206 10:29:49.597445  373863 cri.go:89] found id: "ab9b79c2c68c1be8095a1a81cd7d444d52723042c6629740074d930656007cfd"
	I1206 10:29:49.597448  373863 cri.go:89] found id: "eaaabe40faa63af2c6b5e0ffb01fdbff88ff53227bb4a4b884fca2db86a16b38"
	I1206 10:29:49.597451  373863 cri.go:89] found id: "aea66f37874913be4b5420f3d08acfb0b6388ccfb25c63270ce6741cf675ba44"
	I1206 10:29:49.597455  373863 cri.go:89] found id: "bbd2d73693ff14927141ea51103bb4d99dce673d1531632ca460362ab91bc129"
	I1206 10:29:49.597458  373863 cri.go:89] found id: "778b08b9b628cb82a3c8742868fe4b9a4b0dbad3c250600336afae611d54dcfd"
	I1206 10:29:49.597470  373863 cri.go:89] found id: "358be0ebbc23e420f6fde28e811fd30f1d4064a0e72dfb910c4e719a8d628d3b"
	I1206 10:29:49.597480  373863 cri.go:89] found id: "1c178782b46ae3df28453a2dd88fc57e38eb824abae86db11976cc74cf8b87be"
	I1206 10:29:49.597488  373863 cri.go:89] found id: "4f2a86e87c1bf385e11b164e78ea4f4e9844b0534c9bec2d841dfb406fec8a56"
	I1206 10:29:49.597491  373863 cri.go:89] found id: "410f38934f188529387872c7a0345e42f47f3295f320a1765aa24e1b9a271d4d"
	I1206 10:29:49.597494  373863 cri.go:89] found id: "69ffc3958d44bd262b1360fdb7c52481a97a7e588cd4d05224b3704341139dd0"
	I1206 10:29:49.597510  373863 cri.go:89] found id: "66618904a8d73226678429bf63c1faac7f76d45b9de953c282d294fedfc2cfb6"
	I1206 10:29:49.597516  373863 cri.go:89] found id: "9717574a8255200f8dddcf7a2550e63bdb6b4bb664ec25aeb8635f9277183f01"
	I1206 10:29:49.597520  373863 cri.go:89] found id: "6bfecd83e062db176f5124191f88157b58c2a91ba34d40d6c82c9fbd3c6fee47"
	I1206 10:29:49.597523  373863 cri.go:89] found id: ""
	I1206 10:29:49.597591  373863 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 10:29:49.612840  373863 out.go:203] 
	W1206 10:29:49.615656  373863 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:29:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:29:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 10:29:49.615684  373863 out.go:285] * 
	* 
	W1206 10:29:49.620945  373863 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:29:49.623961  373863 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-545880 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.30s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (501.74s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-196950 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1206 10:38:41.816484  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:39:09.523552  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:40:45.366292  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-205266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:40:45.373689  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-205266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:40:45.385339  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-205266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:40:45.406877  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-205266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:40:45.448497  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-205266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:40:45.530096  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-205266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:40:45.691762  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-205266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:40:46.013569  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-205266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:40:46.655721  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-205266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:40:47.937417  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-205266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:40:50.500398  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-205266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:40:55.622325  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-205266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:41:05.863812  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-205266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:41:26.345360  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-205266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:42:07.308435  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-205266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:43:29.232681  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-205266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:43:41.814940  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-196950 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m19.673904812s)

-- stdout --
	* [functional-196950] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-362985/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-362985/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-196950" primary control-plane node in "functional-196950" cluster
	* Pulling base image v0.0.48-1764843390-22032 ...
	* Found network options:
	  - HTTP_PROXY=localhost:34477
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:34477 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-196950 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-196950 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001256315s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000243455s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000243455s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-196950 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0": exit status 109
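
minikube's closing suggestion in the stderr above is to pin the kubelet cgroup driver to systemd; applied to the failing invocation, that retry would read as follows (whether it clears the 4m0s kubelet health timeout on this cgroups-v1 host is not established by this report):

    out/minikube-linux-arm64 start -p functional-196950 --memory=4096 \
      --apiserver-port=8441 --wait=all --driver=docker --container-runtime=crio \
      --kubernetes-version=v1.35.0-beta.0 \
      --extra-config=kubelet.cgroup-driver=systemd   # flag taken verbatim from minikube's suggestion

The kubeadm preflight warning also names the kubelet 'FailCgroupV1' option for cgroups-v1 hosts running kubelet v1.35 or newer, but this report does not show how that option is wired through minikube, so only the suggested flag is shown here.
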
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-196950
helpers_test.go:243: (dbg) docker inspect functional-196950:

-- stdout --
	[
	    {
	        "Id": "d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1",
	        "Created": "2025-12-06T10:36:45.201779678Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 393848,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T10:36:45.318229053Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/hostname",
	        "HostsPath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/hosts",
	        "LogPath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1-json.log",
	        "Name": "/functional-196950",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-196950:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-196950",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1",
	                "LowerDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1-init/diff:/var/lib/docker/overlay2/5011226d55616c9977b14c1fe617d1302fe59373df05ce8ec6e21b79143a1c57/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-196950",
	                "Source": "/var/lib/docker/volumes/functional-196950/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-196950",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-196950",
	                "name.minikube.sigs.k8s.io": "functional-196950",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9b8f961d55d7529aed7b841f2ac9f818c22ff12b8ad73f2d6bcee22656d9749a",
	            "SandboxKey": "/var/run/docker/netns/9b8f961d55d7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33158"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-196950": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:c1:40:2a:93:47",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a566bfdfd33a868cf61e5b18b36cbd55e9868f24cbb091e055ae606aeb8c6f03",
	                    "EndpointID": "452fe32bde0c42c4c35d700488ae93aeecc6c6a971ac6f1a8a492dbc4b328ed9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-196950",
	                        "d150aac7296d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
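The inspect dump above ends the captured container state: every exposed port (22, 2376, 5000, 8441, 32443) is published on 127.0.0.1 with a host port assigned by Docker at runtime, recorded under NetworkSettings.Ports rather than HostConfig.PortBindings. A single mapping can be read back with the same Go-template query the harness itself runs later in this log; a minimal sketch against this container:

	# print the host port Docker mapped to the apiserver port 8441/tcp
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-196950
	# for the state captured above this prints 33161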
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-196950 -n functional-196950
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-196950 -n functional-196950: exit status 6 (366.841556ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1206 10:44:59.996436  398998 status.go:458] kubeconfig endpoint: get endpoint: "functional-196950" does not appear in /home/jenkins/minikube-integration/22047-362985/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
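The stale-kubeconfig warning is consistent with the stderr: the functional-196950 entry never made it into /home/jenkins/minikube-integration/22047-362985/kubeconfig, so the endpoint lookup fails even though the host container reports Running. The warning text itself names the repair; a minimal sketch, assuming the profile still exists:

	# rewrite the kubeconfig entry for this profile to the current endpoint
	minikube -p functional-196950 update-context
	# confirm the context resolves again
	kubectl config current-context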
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-196950 logs -n 25: (1.290368563s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-205266 ssh sudo cat /etc/ssl/certs/364855.pem                                                                                                  │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image          │ functional-205266 image ls                                                                                                                                │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ ssh            │ functional-205266 ssh sudo cat /usr/share/ca-certificates/364855.pem                                                                                      │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image          │ functional-205266 image save kicbase/echo-server:functional-205266 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ ssh            │ functional-205266 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image          │ functional-205266 image rm kicbase/echo-server:functional-205266 --alsologtostderr                                                                        │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ ssh            │ functional-205266 ssh sudo cat /etc/ssl/certs/3648552.pem                                                                                                 │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image          │ functional-205266 image ls                                                                                                                                │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ ssh            │ functional-205266 ssh sudo cat /usr/share/ca-certificates/3648552.pem                                                                                     │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image          │ functional-205266 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ ssh            │ functional-205266 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image          │ functional-205266 image ls                                                                                                                                │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image          │ functional-205266 image save --daemon kicbase/echo-server:functional-205266 --alsologtostderr                                                             │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ update-context │ functional-205266 update-context --alsologtostderr -v=2                                                                                                   │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ update-context │ functional-205266 update-context --alsologtostderr -v=2                                                                                                   │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ update-context │ functional-205266 update-context --alsologtostderr -v=2                                                                                                   │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image          │ functional-205266 image ls --format short --alsologtostderr                                                                                               │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image          │ functional-205266 image ls --format yaml --alsologtostderr                                                                                                │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ ssh            │ functional-205266 ssh pgrep buildkitd                                                                                                                     │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │                     │
	│ image          │ functional-205266 image ls --format json --alsologtostderr                                                                                                │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image          │ functional-205266 image build -t localhost/my-image:functional-205266 testdata/build --alsologtostderr                                                    │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image          │ functional-205266 image ls --format table --alsologtostderr                                                                                               │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image          │ functional-205266 image ls                                                                                                                                │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ delete         │ -p functional-205266                                                                                                                                      │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ start          │ -p functional-196950 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0         │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:36:40
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:36:40.084322  393441 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:36:40.084440  393441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:36:40.084444  393441 out.go:374] Setting ErrFile to fd 2...
	I1206 10:36:40.084448  393441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:36:40.084719  393441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 10:36:40.085167  393441 out.go:368] Setting JSON to false
	I1206 10:36:40.086103  393441 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8351,"bootTime":1765009049,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 10:36:40.086172  393441 start.go:143] virtualization:  
	I1206 10:36:40.090876  393441 out.go:179] * [functional-196950] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:36:40.095788  393441 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 10:36:40.095913  393441 notify.go:221] Checking for updates...
	I1206 10:36:40.103365  393441 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:36:40.106916  393441 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 10:36:40.110263  393441 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-362985/.minikube
	I1206 10:36:40.113478  393441 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:36:40.116751  393441 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:36:40.120273  393441 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:36:40.153173  393441 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:36:40.153309  393441 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:36:40.214475  393441 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-06 10:36:40.204705682 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:36:40.214570  393441 docker.go:319] overlay module found
	I1206 10:36:40.217954  393441 out.go:179] * Using the docker driver based on user configuration
	I1206 10:36:40.221167  393441 start.go:309] selected driver: docker
	I1206 10:36:40.221185  393441 start.go:927] validating driver "docker" against <nil>
	I1206 10:36:40.221223  393441 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:36:40.222001  393441 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:36:40.281008  393441 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-06 10:36:40.271866661 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:36:40.281184  393441 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 10:36:40.281413  393441 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 10:36:40.284457  393441 out.go:179] * Using Docker driver with root privileges
	I1206 10:36:40.287511  393441 cni.go:84] Creating CNI manager for ""
	I1206 10:36:40.287582  393441 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:36:40.287589  393441 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 10:36:40.287670  393441 start.go:353] cluster config:
	{Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:36:40.290867  393441 out.go:179] * Starting "functional-196950" primary control-plane node in "functional-196950" cluster
	I1206 10:36:40.293695  393441 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 10:36:40.296731  393441 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:36:40.299692  393441 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 10:36:40.299735  393441 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1206 10:36:40.299752  393441 cache.go:65] Caching tarball of preloaded images
	I1206 10:36:40.299850  393441 preload.go:238] Found /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1206 10:36:40.299869  393441 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1206 10:36:40.300272  393441 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/config.json ...
	I1206 10:36:40.300301  393441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/config.json: {Name:mk09523b77aed32805a923bddb8d6fbabfc72972 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:36:40.300490  393441 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:36:40.321538  393441 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 10:36:40.321551  393441 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 10:36:40.321566  393441 cache.go:243] Successfully downloaded all kic artifacts
	I1206 10:36:40.321599  393441 start.go:360] acquireMachinesLock for functional-196950: {Name:mkd2471f275d1d2a438cb4ce89f1d1521a0fb340 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:36:40.321701  393441 start.go:364] duration metric: took 87.46µs to acquireMachinesLock for "functional-196950"
	I1206 10:36:40.321725  393441 start.go:93] Provisioning new machine with config: &{Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 10:36:40.321790  393441 start.go:125] createHost starting for "" (driver="docker")
	I1206 10:36:40.325270  393441 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1206 10:36:40.325555  393441 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:34477 to docker env.
	I1206 10:36:40.325582  393441 start.go:159] libmachine.API.Create for "functional-196950" (driver="docker")
	I1206 10:36:40.325604  393441 client.go:173] LocalClient.Create starting
	I1206 10:36:40.325665  393441 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem
	I1206 10:36:40.325700  393441 main.go:143] libmachine: Decoding PEM data...
	I1206 10:36:40.325714  393441 main.go:143] libmachine: Parsing certificate...
	I1206 10:36:40.325771  393441 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem
	I1206 10:36:40.325788  393441 main.go:143] libmachine: Decoding PEM data...
	I1206 10:36:40.325803  393441 main.go:143] libmachine: Parsing certificate...
	I1206 10:36:40.326179  393441 cli_runner.go:164] Run: docker network inspect functional-196950 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 10:36:40.345309  393441 cli_runner.go:211] docker network inspect functional-196950 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 10:36:40.345390  393441 network_create.go:284] running [docker network inspect functional-196950] to gather additional debugging logs...
	I1206 10:36:40.345406  393441 cli_runner.go:164] Run: docker network inspect functional-196950
	W1206 10:36:40.361376  393441 cli_runner.go:211] docker network inspect functional-196950 returned with exit code 1
	I1206 10:36:40.361396  393441 network_create.go:287] error running [docker network inspect functional-196950]: docker network inspect functional-196950: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-196950 not found
	I1206 10:36:40.361409  393441 network_create.go:289] output of [docker network inspect functional-196950]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-196950 not found
	
	** /stderr **
	I1206 10:36:40.361529  393441 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:36:40.381134  393441 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400197e100}
	I1206 10:36:40.381162  393441 network_create.go:124] attempt to create docker network functional-196950 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1206 10:36:40.381225  393441 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-196950 functional-196950
	I1206 10:36:40.444938  393441 network_create.go:108] docker network functional-196950 192.168.49.0/24 created
	I1206 10:36:40.444963  393441 kic.go:121] calculated static IP "192.168.49.2" for the "functional-196950" container
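	(minikube carved out a dedicated bridge network here and pinned the node to the first client address in it. The subnet choice can be confirmed with the same IPAM template the harness uses; a small sketch:
	  # print the subnet of the per-profile network (expected: 192.168.49.0/24)
	  docker network inspect functional-196950 --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}')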
	I1206 10:36:40.445062  393441 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 10:36:40.460006  393441 cli_runner.go:164] Run: docker volume create functional-196950 --label name.minikube.sigs.k8s.io=functional-196950 --label created_by.minikube.sigs.k8s.io=true
	I1206 10:36:40.478257  393441 oci.go:103] Successfully created a docker volume functional-196950
	I1206 10:36:40.478347  393441 cli_runner.go:164] Run: docker run --rm --name functional-196950-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-196950 --entrypoint /usr/bin/test -v functional-196950:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 10:36:41.044309  393441 oci.go:107] Successfully prepared a docker volume functional-196950
	I1206 10:36:41.044356  393441 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 10:36:41.044365  393441 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 10:36:41.044435  393441 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v functional-196950:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 10:36:45.054086  393441 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v functional-196950:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (4.009605324s)
	I1206 10:36:45.054110  393441 kic.go:203] duration metric: took 4.009741457s to extract preloaded images to volume ...
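	(the two steps above seed the node's /var volume before the node container exists: a throwaway container mounts the preload tarball read-only next to the named volume and untars into it. The same pattern reduced to its moving parts; paths, volume name, and image are placeholders, and the image is assumed to ship GNU tar plus lz4:
	  docker run --rm --entrypoint /usr/bin/tar \
	    -v /path/to/preload.tar.lz4:/preloaded.tar:ro \
	    -v myvolume:/extractDir \
	    some/base-image -I lz4 -xf /preloaded.tar -C /extractDir)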
	W1206 10:36:45.054285  393441 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1206 10:36:45.054408  393441 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 10:36:45.150699  393441 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-196950 --name functional-196950 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-196950 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-196950 --network functional-196950 --ip 192.168.49.2 --volume functional-196950:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
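	(note the publish flags on this create command: the double-colon form --publish=127.0.0.1::8441 binds to loopback and leaves the host side blank, which is why HostConfig.PortBindings in the inspect dump showed empty HostPort values while NetworkSettings.Ports carried the ephemeral ports Docker picked. A self-contained illustration of the same form, with a hypothetical name and image:
	  docker run -d --name demo --publish=127.0.0.1::80 nginx
	  docker port demo 80   # prints 127.0.0.1:<ephemeral port>)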
	I1206 10:36:45.577731  393441 cli_runner.go:164] Run: docker container inspect functional-196950 --format={{.State.Running}}
	I1206 10:36:45.600239  393441 cli_runner.go:164] Run: docker container inspect functional-196950 --format={{.State.Status}}
	I1206 10:36:45.626277  393441 cli_runner.go:164] Run: docker exec functional-196950 stat /var/lib/dpkg/alternatives/iptables
	I1206 10:36:45.683992  393441 oci.go:144] the created container "functional-196950" has a running status.
	I1206 10:36:45.684011  393441 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa...
	I1206 10:36:45.890386  393441 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 10:36:45.919690  393441 cli_runner.go:164] Run: docker container inspect functional-196950 --format={{.State.Status}}
	I1206 10:36:45.953597  393441 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 10:36:45.953608  393441 kic_runner.go:114] Args: [docker exec --privileged functional-196950 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 10:36:46.014256  393441 cli_runner.go:164] Run: docker container inspect functional-196950 --format={{.State.Status}}
	I1206 10:36:46.043455  393441 machine.go:94] provisionDockerMachine start ...
	I1206 10:36:46.043675  393441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:36:46.070095  393441 main.go:143] libmachine: Using SSH client type: native
	I1206 10:36:46.070454  393441 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1206 10:36:46.070465  393441 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:36:46.071095  393441 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41036->127.0.0.1:33158: read: connection reset by peer
	I1206 10:36:49.223078  393441 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-196950
	
	I1206 10:36:49.223091  393441 ubuntu.go:182] provisioning hostname "functional-196950"
	I1206 10:36:49.223173  393441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:36:49.242317  393441 main.go:143] libmachine: Using SSH client type: native
	I1206 10:36:49.242624  393441 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1206 10:36:49.242635  393441 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-196950 && echo "functional-196950" | sudo tee /etc/hostname
	I1206 10:36:49.404581  393441 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-196950
	
	I1206 10:36:49.404670  393441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:36:49.423030  393441 main.go:143] libmachine: Using SSH client type: native
	I1206 10:36:49.423342  393441 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1206 10:36:49.423364  393441 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-196950' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-196950/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-196950' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:36:49.575928  393441 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 10:36:49.575944  393441 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-362985/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-362985/.minikube}
	I1206 10:36:49.575962  393441 ubuntu.go:190] setting up certificates
	I1206 10:36:49.575969  393441 provision.go:84] configureAuth start
	I1206 10:36:49.576031  393441 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-196950
	I1206 10:36:49.594106  393441 provision.go:143] copyHostCerts
	I1206 10:36:49.594182  393441 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem, removing ...
	I1206 10:36:49.594190  393441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem
	I1206 10:36:49.594265  393441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem (1082 bytes)
	I1206 10:36:49.594354  393441 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem, removing ...
	I1206 10:36:49.594358  393441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem
	I1206 10:36:49.594381  393441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem (1123 bytes)
	I1206 10:36:49.594429  393441 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem, removing ...
	I1206 10:36:49.594432  393441 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem
	I1206 10:36:49.594453  393441 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem (1679 bytes)
	I1206 10:36:49.594496  393441 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem org=jenkins.functional-196950 san=[127.0.0.1 192.168.49.2 functional-196950 localhost minikube]
	I1206 10:36:49.796142  393441 provision.go:177] copyRemoteCerts
	I1206 10:36:49.796205  393441 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:36:49.796244  393441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:36:49.814115  393441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:36:49.919761  393441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 10:36:49.938042  393441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 10:36:49.956253  393441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 10:36:49.974320  393441 provision.go:87] duration metric: took 398.327622ms to configureAuth
	I1206 10:36:49.974347  393441 ubuntu.go:206] setting minikube options for container-runtime
	I1206 10:36:49.974537  393441 config.go:182] Loaded profile config "functional-196950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:36:49.974635  393441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:36:49.992873  393441 main.go:143] libmachine: Using SSH client type: native
	I1206 10:36:49.993195  393441 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1206 10:36:49.993207  393441 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 10:36:50.311473  393441 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 10:36:50.311487  393441 machine.go:97] duration metric: took 4.268019556s to provisionDockerMachine
	I1206 10:36:50.311496  393441 client.go:176] duration metric: took 9.985888292s to LocalClient.Create
	I1206 10:36:50.311515  393441 start.go:167] duration metric: took 9.985933856s to libmachine.API.Create "functional-196950"
	I1206 10:36:50.311521  393441 start.go:293] postStartSetup for "functional-196950" (driver="docker")
	I1206 10:36:50.311532  393441 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:36:50.311592  393441 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:36:50.311635  393441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:36:50.329060  393441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:36:50.435256  393441 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:36:50.438410  393441 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 10:36:50.438428  393441 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 10:36:50.438438  393441 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-362985/.minikube/addons for local assets ...
	I1206 10:36:50.438493  393441 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-362985/.minikube/files for local assets ...
	I1206 10:36:50.438589  393441 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem -> 3648552.pem in /etc/ssl/certs
	I1206 10:36:50.438683  393441 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/test/nested/copy/364855/hosts -> hosts in /etc/test/nested/copy/364855
	I1206 10:36:50.438734  393441 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/364855
	I1206 10:36:50.446315  393441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem --> /etc/ssl/certs/3648552.pem (1708 bytes)
	I1206 10:36:50.464354  393441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/test/nested/copy/364855/hosts --> /etc/test/nested/copy/364855/hosts (40 bytes)
	I1206 10:36:50.482242  393441 start.go:296] duration metric: took 170.707074ms for postStartSetup
	I1206 10:36:50.482613  393441 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-196950
	I1206 10:36:50.500073  393441 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/config.json ...
	I1206 10:36:50.500358  393441 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:36:50.500398  393441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:36:50.518730  393441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:36:50.620343  393441 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 10:36:50.625272  393441 start.go:128] duration metric: took 10.303467479s to createHost
	I1206 10:36:50.625287  393441 start.go:83] releasing machines lock for "functional-196950", held for 10.30357962s
	I1206 10:36:50.625357  393441 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-196950
	I1206 10:36:50.646736  393441 out.go:179] * Found network options:
	I1206 10:36:50.649768  393441 out.go:179]   - HTTP_PROXY=localhost:34477
	W1206 10:36:50.652692  393441 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1206 10:36:50.655649  393441 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1206 10:36:50.658635  393441 ssh_runner.go:195] Run: cat /version.json
	I1206 10:36:50.658688  393441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:36:50.658708  393441 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:36:50.658775  393441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:36:50.677229  393441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:36:50.679760  393441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:36:50.876454  393441 ssh_runner.go:195] Run: systemctl --version
	I1206 10:36:50.883695  393441 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 10:36:50.919332  393441 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 10:36:50.923938  393441 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:36:50.924016  393441 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:36:50.952702  393441 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1206 10:36:50.952716  393441 start.go:496] detecting cgroup driver to use...
	I1206 10:36:50.952750  393441 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 10:36:50.952800  393441 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 10:36:50.970918  393441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 10:36:50.983734  393441 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:36:50.983790  393441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:36:51.001514  393441 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:36:51.024207  393441 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:36:51.151163  393441 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:36:51.289369  393441 docker.go:234] disabling docker service ...
	I1206 10:36:51.289424  393441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:36:51.316993  393441 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:36:51.330488  393441 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:36:51.450793  393441 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:36:51.564960  393441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 10:36:51.578100  393441 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:36:51.597797  393441 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 10:36:51.597860  393441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:36:51.607450  393441 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 10:36:51.607513  393441 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:36:51.616797  393441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:36:51.625436  393441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:36:51.634444  393441 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:36:51.642433  393441 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:36:51.651034  393441 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:36:51.664440  393441 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:36:51.673376  393441 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:36:51.681584  393441 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 10:36:51.689136  393441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:36:51.796182  393441 ssh_runner.go:195] Run: sudo systemctl restart crio
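	(the sed sequence above is the whole cri-o reconfiguration: pin the pause image, force the cgroupfs manager, move conmon into the pod cgroup, and open unprivileged low ports, all against one drop-in file. The core of it condensed into a standalone sketch, assuming the same drop-in path:
	  CONF=/etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
	  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	  sudo systemctl daemon-reload && sudo systemctl restart crio)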
	I1206 10:36:51.966753  393441 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 10:36:51.966815  393441 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 10:36:51.970719  393441 start.go:564] Will wait 60s for crictl version
	I1206 10:36:51.970791  393441 ssh_runner.go:195] Run: which crictl
	I1206 10:36:51.974303  393441 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 10:36:52.008596  393441 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
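	(crictl picks up its endpoint from the /etc/crictl.yaml written earlier in this log; the same probe can be made with the socket spelled out explicitly:
	  # equivalent to the version query above, endpoint passed inline
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version)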
	I1206 10:36:52.008694  393441 ssh_runner.go:195] Run: crio --version
	I1206 10:36:52.041278  393441 ssh_runner.go:195] Run: crio --version
	I1206 10:36:52.074523  393441 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1206 10:36:52.077388  393441 cli_runner.go:164] Run: docker network inspect functional-196950 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:36:52.094708  393441 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 10:36:52.099123  393441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 10:36:52.111440  393441 kubeadm.go:884] updating cluster {Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:36:52.111547  393441 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 10:36:52.111603  393441 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:36:52.146167  393441 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:36:52.146193  393441 crio.go:433] Images already preloaded, skipping extraction
	I1206 10:36:52.146249  393441 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:36:52.172478  393441 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:36:52.172491  393441 cache_images.go:86] Images are preloaded, skipping loading
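	The two `crictl images --output json` runs above are how minikube decides that the preloaded image tarball already covers this Kubernetes version. The same data can be eyeballed by hand (sketch):

	    $ sudo crictl images | grep -E 'kube-(apiserver|controller-manager|scheduler|proxy)|etcd|coredns|pause'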
	I1206 10:36:52.172497  393441 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1206 10:36:52.172590  393441 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-196950 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 10:36:52.172674  393441 ssh_runner.go:195] Run: crio config
	I1206 10:36:52.227415  393441 cni.go:84] Creating CNI manager for ""
	I1206 10:36:52.227424  393441 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:36:52.227444  393441 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 10:36:52.227466  393441 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-196950 NodeName:functional-196950 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:36:52.227592  393441 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-196950"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 10:36:52.227665  393441 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 10:36:52.235453  393441 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:36:52.235530  393441 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:36:52.243232  393441 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1206 10:36:52.256593  393441 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 10:36:52.269638  393441 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
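	With kubeadm.yaml.new now on the node, the generated config above can be sanity-checked before init; a sketch, assuming the `kubeadm config validate` subcommand (present in recent kubeadm releases) is available in this v1.35.0-beta.0 build:

	    $ sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new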
	I1206 10:36:52.285041  393441 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 10:36:52.288699  393441 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 10:36:52.298739  393441 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:36:52.413114  393441 ssh_runner.go:195] Run: sudo systemctl start kubelet
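	After `systemctl start kubelet`, the kubelet's local health endpoint is the same one kubeadm polls during wait-control-plane further down. A manual probe looks like this (sketch; 10248 is the kubelet's default healthz port, and on a healthy node the body is simply `ok` -- in this run it never gets there):

	    $ systemctl is-active kubelet
	    $ curl -s http://127.0.0.1:10248/healthz; echo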
	I1206 10:36:52.429501  393441 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950 for IP: 192.168.49.2
	I1206 10:36:52.429527  393441 certs.go:195] generating shared ca certs ...
	I1206 10:36:52.429542  393441 certs.go:227] acquiring lock for ca certs: {Name:mke2ec61a37b6f3abbcbeb9abd23d6a19d011dd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:36:52.429703  393441 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.key
	I1206 10:36:52.429757  393441 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.key
	I1206 10:36:52.429764  393441 certs.go:257] generating profile certs ...
	I1206 10:36:52.429826  393441 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.key
	I1206 10:36:52.429837  393441 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.crt with IP's: []
	I1206 10:36:52.489387  393441 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.crt ...
	I1206 10:36:52.489405  393441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.crt: {Name:mkd60d2feabb17077c640ff0c1061e449dc8ab67 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:36:52.489601  393441 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.key ...
	I1206 10:36:52.489609  393441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.key: {Name:mk9168d8b2aad428c9502ee42c72b5aa3c2be0de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:36:52.489701  393441 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.key.a77b39a6
	I1206 10:36:52.489712  393441 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.crt.a77b39a6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1206 10:36:52.995895  393441 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.crt.a77b39a6 ...
	I1206 10:36:52.995913  393441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.crt.a77b39a6: {Name:mk5a1a22bf5e145efe9ffee706434d2fd910991f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:36:52.996103  393441 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.key.a77b39a6 ...
	I1206 10:36:52.996112  393441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.key.a77b39a6: {Name:mke52694d018ba307c9f495ea62477da6232cf91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:36:52.996197  393441 certs.go:382] copying /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.crt.a77b39a6 -> /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.crt
	I1206 10:36:52.996269  393441 certs.go:386] copying /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.key.a77b39a6 -> /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.key
	I1206 10:36:52.996320  393441 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.key
	I1206 10:36:52.996331  393441 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.crt with IP's: []
	I1206 10:36:53.098830  393441 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.crt ...
	I1206 10:36:53.098845  393441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.crt: {Name:mkee0682b76f9b307794e3edffad2616470c8c95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:36:53.099036  393441 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.key ...
	I1206 10:36:53.099045  393441 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.key: {Name:mka063d68c820b1a0486c5eb83044a91eac96008 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:36:53.099241  393441 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855.pem (1338 bytes)
	W1206 10:36:53.099282  393441 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855_empty.pem, impossibly tiny 0 bytes
	I1206 10:36:53.099291  393441 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 10:36:53.099322  393441 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem (1082 bytes)
	I1206 10:36:53.099346  393441 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:36:53.099396  393441 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem (1679 bytes)
	I1206 10:36:53.099444  393441 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem (1708 bytes)
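	The profile certs generated above are copied into /var/lib/minikube/certs next. Since the apiserver cert was signed for [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2], the SANs can be confirmed on disk with openssl (sketch):

	    $ sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
	        | grep -A1 'Subject Alternative Name'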
	I1206 10:36:53.100016  393441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:36:53.119245  393441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 10:36:53.138327  393441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:36:53.156512  393441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 10:36:53.175557  393441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 10:36:53.193881  393441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 10:36:53.212204  393441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:36:53.234096  393441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 10:36:53.255613  393441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855.pem --> /usr/share/ca-certificates/364855.pem (1338 bytes)
	I1206 10:36:53.279321  393441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem --> /usr/share/ca-certificates/3648552.pem (1708 bytes)
	I1206 10:36:53.299242  393441 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:36:53.318516  393441 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 10:36:53.331576  393441 ssh_runner.go:195] Run: openssl version
	I1206 10:36:53.338322  393441 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364855.pem
	I1206 10:36:53.345700  393441 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364855.pem /etc/ssl/certs/364855.pem
	I1206 10:36:53.353271  393441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364855.pem
	I1206 10:36:53.357092  393441 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:36 /usr/share/ca-certificates/364855.pem
	I1206 10:36:53.357149  393441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364855.pem
	I1206 10:36:53.398076  393441 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 10:36:53.405722  393441 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/364855.pem /etc/ssl/certs/51391683.0
	I1206 10:36:53.413215  393441 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648552.pem
	I1206 10:36:53.420667  393441 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648552.pem /etc/ssl/certs/3648552.pem
	I1206 10:36:53.428131  393441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648552.pem
	I1206 10:36:53.432166  393441 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:36 /usr/share/ca-certificates/3648552.pem
	I1206 10:36:53.432225  393441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648552.pem
	I1206 10:36:53.473203  393441 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 10:36:53.480939  393441 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3648552.pem /etc/ssl/certs/3ec20f2e.0
	I1206 10:36:53.488537  393441 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:36:53.496101  393441 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:36:53.503631  393441 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:36:53.507483  393441 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:26 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:36:53.507541  393441 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:36:53.550245  393441 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 10:36:53.558052  393441 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
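	The hash-and-symlink sequence above follows OpenSSL's CA lookup convention: `openssl x509 -hash` prints the subject-name hash, and OpenSSL resolves trust anchors through /etc/ssl/certs/<hash>.0. The minikubeCA step, reproduced by hand with the hash from the log line above:

	    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    b5213941
	    $ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0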
	I1206 10:36:53.565677  393441 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:36:53.569205  393441 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 10:36:53.569245  393441 kubeadm.go:401] StartCluster: {Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:36:53.569309  393441 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:36:53.569364  393441 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:36:53.598161  393441 cri.go:89] found id: ""
	I1206 10:36:53.598222  393441 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:36:53.606029  393441 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 10:36:53.613833  393441 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 10:36:53.613887  393441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:36:53.622184  393441 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:36:53.622193  393441 kubeadm.go:158] found existing configuration files:
	
	I1206 10:36:53.622260  393441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:36:53.630212  393441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:36:53.630278  393441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:36:53.637730  393441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:36:53.645384  393441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:36:53.645440  393441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:36:53.652848  393441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:36:53.660497  393441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:36:53.660558  393441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:36:53.668158  393441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:36:53.676285  393441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:36:53.676346  393441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 10:36:53.683838  393441 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 10:36:53.730762  393441 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 10:36:53.730991  393441 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:36:53.801601  393441 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:36:53.801665  393441 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:36:53.801702  393441 kubeadm.go:319] OS: Linux
	I1206 10:36:53.801746  393441 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:36:53.801794  393441 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:36:53.801840  393441 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:36:53.801888  393441 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:36:53.801941  393441 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:36:53.801987  393441 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:36:53.802040  393441 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:36:53.802086  393441 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:36:53.802131  393441 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:36:53.870065  393441 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:36:53.870168  393441 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:36:53.870257  393441 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:36:53.878285  393441 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:36:53.884701  393441 out.go:252]   - Generating certificates and keys ...
	I1206 10:36:53.884797  393441 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:36:53.884860  393441 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:36:53.920065  393441 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 10:36:54.121417  393441 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 10:36:54.350778  393441 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 10:36:54.548496  393441 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 10:36:54.698720  393441 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 10:36:54.699055  393441 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-196950 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1206 10:36:54.891523  393441 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 10:36:54.891849  393441 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-196950 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1206 10:36:55.042436  393441 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 10:36:55.362240  393441 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 10:36:55.561510  393441 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 10:36:55.561760  393441 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:36:55.807971  393441 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:36:56.112687  393441 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:36:56.406050  393441 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:36:56.727083  393441 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:36:57.123257  393441 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:36:57.123902  393441 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:36:57.126961  393441 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:36:57.130559  393441 out.go:252]   - Booting up control plane ...
	I1206 10:36:57.130693  393441 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:36:57.130774  393441 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:36:57.131802  393441 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:36:57.147906  393441 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:36:57.148172  393441 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:36:57.155440  393441 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:36:57.155782  393441 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:36:57.155974  393441 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:36:57.289815  393441 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:36:57.289927  393441 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 10:40:57.290706  393441 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001256315s
	I1206 10:40:57.290728  393441 kubeadm.go:319] 
	I1206 10:40:57.290784  393441 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 10:40:57.290818  393441 kubeadm.go:319] 	- The kubelet is not running
	I1206 10:40:57.290922  393441 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 10:40:57.290926  393441 kubeadm.go:319] 
	I1206 10:40:57.291029  393441 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 10:40:57.291060  393441 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 10:40:57.291099  393441 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 10:40:57.291102  393441 kubeadm.go:319] 
	I1206 10:40:57.296264  393441 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 10:40:57.296710  393441 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 10:40:57.296825  393441 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 10:40:57.297078  393441 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 10:40:57.297083  393441 kubeadm.go:319] 
	I1206 10:40:57.297157  393441 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1206 10:40:57.297270  393441 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-196950 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-196950 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001256315s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
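	This is the classic "kubelet never answered healthz" failure. Before the retry below, the usual cross-checks on the node would be (a troubleshooting sketch, using only tools already present in this environment):

	    $ systemctl status kubelet --no-pager
	    $ sudo journalctl -xeu kubelet -n 100 --no-pager
	    # the cgroup drivers must agree on both sides:
	    $ grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf   # "cgroupfs", set earlier in this log
	    $ grep cgroupDriver /var/lib/kubelet/config.yaml           # should also be cgroupfs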
	
	I1206 10:40:57.297372  393441 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1206 10:40:57.716230  393441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:40:57.729720  393441 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 10:40:57.729779  393441 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:40:57.737811  393441 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:40:57.737823  393441 kubeadm.go:158] found existing configuration files:
	
	I1206 10:40:57.737874  393441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:40:57.745665  393441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:40:57.745719  393441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:40:57.753194  393441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:40:57.761661  393441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:40:57.761721  393441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:40:57.769863  393441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:40:57.777913  393441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:40:57.777981  393441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:40:57.785580  393441 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:40:57.794257  393441 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:40:57.794314  393441 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 10:40:57.802234  393441 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 10:40:57.842208  393441 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 10:40:57.842259  393441 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:40:57.923299  393441 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:40:57.923370  393441 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:40:57.923431  393441 kubeadm.go:319] OS: Linux
	I1206 10:40:57.923475  393441 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:40:57.923522  393441 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:40:57.923568  393441 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:40:57.923614  393441 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:40:57.923661  393441 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:40:57.923707  393441 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:40:57.923751  393441 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:40:57.923798  393441 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:40:57.923842  393441 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:40:57.992429  393441 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:40:57.992537  393441 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:40:57.992628  393441 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:40:58.007531  393441 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:40:58.011234  393441 out.go:252]   - Generating certificates and keys ...
	I1206 10:40:58.011326  393441 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:40:58.011441  393441 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:40:58.011548  393441 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 10:40:58.011610  393441 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 10:40:58.011679  393441 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 10:40:58.011731  393441 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 10:40:58.011793  393441 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 10:40:58.011853  393441 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 10:40:58.011926  393441 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 10:40:58.011998  393441 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 10:40:58.012034  393441 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 10:40:58.012090  393441 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:40:58.069264  393441 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:40:58.749222  393441 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:40:58.866683  393441 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:40:58.919968  393441 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:40:59.071129  393441 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:40:59.071958  393441 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:40:59.074739  393441 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:40:59.078115  393441 out.go:252]   - Booting up control plane ...
	I1206 10:40:59.078218  393441 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:40:59.078300  393441 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:40:59.078370  393441 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:40:59.092790  393441 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:40:59.092892  393441 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:40:59.100048  393441 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:40:59.100384  393441 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:40:59.100425  393441 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:40:59.233892  393441 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:40:59.234005  393441 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 10:44:59.234111  393441 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000243455s
	I1206 10:44:59.234131  393441 kubeadm.go:319] 
	I1206 10:44:59.234184  393441 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 10:44:59.234243  393441 kubeadm.go:319] 	- The kubelet is not running
	I1206 10:44:59.234342  393441 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 10:44:59.234346  393441 kubeadm.go:319] 
	I1206 10:44:59.234443  393441 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 10:44:59.234517  393441 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 10:44:59.234559  393441 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 10:44:59.234565  393441 kubeadm.go:319] 
	I1206 10:44:59.238906  393441 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 10:44:59.239326  393441 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 10:44:59.239445  393441 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 10:44:59.239706  393441 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1206 10:44:59.239711  393441 kubeadm.go:319] 
	I1206 10:44:59.239778  393441 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
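	Both attempts print the same cgroups v1 deprecation warning, and this 5.15 AWS kernel does appear to still be on cgroup v1. Per the warning, kubelet v1.35+ wants an explicit opt-in, which would look roughly like this in the KubeletConfiguration (a sketch; the camelCase field name follows v1beta1 convention, and whether minikube plumbs this option through is not shown in this log):

	    apiVersion: kubelet.config.k8s.io/v1beta1
	    kind: KubeletConfiguration
	    failCgroupV1: false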
	I1206 10:44:59.239831  393441 kubeadm.go:403] duration metric: took 8m5.670588626s to StartCluster
	I1206 10:44:59.239864  393441 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:44:59.239926  393441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:44:59.265770  393441 cri.go:89] found id: ""
	I1206 10:44:59.265785  393441 logs.go:282] 0 containers: []
	W1206 10:44:59.265792  393441 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:44:59.265798  393441 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:44:59.265866  393441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:44:59.291965  393441 cri.go:89] found id: ""
	I1206 10:44:59.291979  393441 logs.go:282] 0 containers: []
	W1206 10:44:59.291987  393441 logs.go:284] No container was found matching "etcd"
	I1206 10:44:59.291991  393441 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:44:59.292049  393441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:44:59.321569  393441 cri.go:89] found id: ""
	I1206 10:44:59.321584  393441 logs.go:282] 0 containers: []
	W1206 10:44:59.321591  393441 logs.go:284] No container was found matching "coredns"
	I1206 10:44:59.321596  393441 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:44:59.321655  393441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:44:59.350535  393441 cri.go:89] found id: ""
	I1206 10:44:59.350549  393441 logs.go:282] 0 containers: []
	W1206 10:44:59.350555  393441 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:44:59.350560  393441 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:44:59.350625  393441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:44:59.376107  393441 cri.go:89] found id: ""
	I1206 10:44:59.376128  393441 logs.go:282] 0 containers: []
	W1206 10:44:59.376135  393441 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:44:59.376141  393441 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:44:59.376199  393441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:44:59.404566  393441 cri.go:89] found id: ""
	I1206 10:44:59.404580  393441 logs.go:282] 0 containers: []
	W1206 10:44:59.404588  393441 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:44:59.404593  393441 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:44:59.404653  393441 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:44:59.434647  393441 cri.go:89] found id: ""
	I1206 10:44:59.434660  393441 logs.go:282] 0 containers: []
	W1206 10:44:59.434668  393441 logs.go:284] No container was found matching "kindnet"
	I1206 10:44:59.434676  393441 logs.go:123] Gathering logs for container status ...
	I1206 10:44:59.434686  393441 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:44:59.464494  393441 logs.go:123] Gathering logs for kubelet ...
	I1206 10:44:59.464510  393441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:44:59.534409  393441 logs.go:123] Gathering logs for dmesg ...
	I1206 10:44:59.534429  393441 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:44:59.550246  393441 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:44:59.550263  393441 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:44:59.617027  393441 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:44:59.608042    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:44:59.608915    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:44:59.610588    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:44:59.611029    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:44:59.612542    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:44:59.608042    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:44:59.608915    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:44:59.610588    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:44:59.611029    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:44:59.612542    4839 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:44:59.617039  393441 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:44:59.617050  393441 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1206 10:44:59.647827  393441 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000243455s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1206 10:44:59.647876  393441 out.go:285] * 
	W1206 10:44:59.647986  393441 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000243455s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 10:44:59.648039  393441 out.go:285] * 
	W1206 10:44:59.650177  393441 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:44:59.655718  393441 out.go:203] 
	W1206 10:44:59.659518  393441 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000243455s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 10:44:59.659571  393441 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 10:44:59.659593  393441 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1206 10:44:59.663343  393441 out.go:203] 
	
	
	==> CRI-O <==
	Dec 06 10:36:51 functional-196950 crio[842]: time="2025-12-06T10:36:51.960390017Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 06 10:36:51 functional-196950 crio[842]: time="2025-12-06T10:36:51.960428836Z" level=info msg="Starting seccomp notifier watcher"
	Dec 06 10:36:51 functional-196950 crio[842]: time="2025-12-06T10:36:51.960477748Z" level=info msg="Create NRI interface"
	Dec 06 10:36:51 functional-196950 crio[842]: time="2025-12-06T10:36:51.960580206Z" level=info msg="built-in NRI default validator is disabled"
	Dec 06 10:36:51 functional-196950 crio[842]: time="2025-12-06T10:36:51.960587607Z" level=info msg="runtime interface created"
	Dec 06 10:36:51 functional-196950 crio[842]: time="2025-12-06T10:36:51.96059898Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 06 10:36:51 functional-196950 crio[842]: time="2025-12-06T10:36:51.96060593Z" level=info msg="runtime interface starting up..."
	Dec 06 10:36:51 functional-196950 crio[842]: time="2025-12-06T10:36:51.960611878Z" level=info msg="starting plugins..."
	Dec 06 10:36:51 functional-196950 crio[842]: time="2025-12-06T10:36:51.960625154Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 06 10:36:51 functional-196950 crio[842]: time="2025-12-06T10:36:51.960690534Z" level=info msg="No systemd watchdog enabled"
	Dec 06 10:36:51 functional-196950 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 06 10:36:53 functional-196950 crio[842]: time="2025-12-06T10:36:53.873754241Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=baca8aba-77f7-4679-8290-92d25e869c56 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:36:53 functional-196950 crio[842]: time="2025-12-06T10:36:53.874500293Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=8202d0cf-9758-4cd6-97e4-3a80af2fc15f name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:36:53 functional-196950 crio[842]: time="2025-12-06T10:36:53.875023565Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=54bb2f79-3871-4964-b62a-3a60c296893f name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:36:53 functional-196950 crio[842]: time="2025-12-06T10:36:53.875665492Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=4414f3da-5c96-4ea3-90c1-97774c26d113 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:36:53 functional-196950 crio[842]: time="2025-12-06T10:36:53.876157354Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=a4b45262-16fb-47e1-b8a0-7e8922a73e35 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:36:53 functional-196950 crio[842]: time="2025-12-06T10:36:53.876712577Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=d3d2b42b-1523-4392-821c-56f1494cdb00 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:36:53 functional-196950 crio[842]: time="2025-12-06T10:36:53.877302845Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=19ae5b22-7b9e-4b99-b480-18a427f6fcdd name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:40:57 functional-196950 crio[842]: time="2025-12-06T10:40:57.995849978Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=5f99ac95-baa8-4f84-b389-587f72629c4a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:40:57 functional-196950 crio[842]: time="2025-12-06T10:40:57.996717384Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=0b1abc3f-c397-40f0-bf2f-20bcc39d026a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:40:57 functional-196950 crio[842]: time="2025-12-06T10:40:57.99758671Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=8163fed6-eac4-4809-a84a-a98aa80a371b name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:40:57 functional-196950 crio[842]: time="2025-12-06T10:40:57.998230902Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=51a8ee8c-eacd-4799-bc86-050cde63ab04 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:40:57 functional-196950 crio[842]: time="2025-12-06T10:40:57.99877435Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=8c5a2cc7-2130-4109-8954-30e07e82a18a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:40:57 functional-196950 crio[842]: time="2025-12-06T10:40:57.999310643Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=a2123f18-4bc8-4c4c-b204-72991601bbea name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:40:58 functional-196950 crio[842]: time="2025-12-06T10:40:58.000610455Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=933b2c94-ddd5-4070-a499-fdc8f2486553 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:45:01.246220    4944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:45:01.247071    4944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:45:01.248808    4944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:45:01.249507    4944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:45:01.251542    4944 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 6 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014752] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503231] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.065820] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.901896] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.944603] kauditd_printk_skb: 39 callbacks suppressed
	[Dec 6 09:04] hrtimer: interrupt took 32230230 ns
	[Dec 6 10:25] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 6 10:26] overlayfs: idmapped layers are currently not supported
	[  +0.066821] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 6 10:32] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:33] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 10:45:01 up  2:27,  0 user,  load average: 0.28, 0.53, 1.19
	Linux functional-196950 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 10:44:58 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:44:59 functional-196950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 06 10:44:59 functional-196950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:44:59 functional-196950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:44:59 functional-196950 kubelet[4827]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:44:59 functional-196950 kubelet[4827]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:44:59 functional-196950 kubelet[4827]: E1206 10:44:59.554496    4827 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:44:59 functional-196950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:44:59 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:45:00 functional-196950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 06 10:45:00 functional-196950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:45:00 functional-196950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:45:00 functional-196950 kubelet[4860]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:45:00 functional-196950 kubelet[4860]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:45:00 functional-196950 kubelet[4860]: E1206 10:45:00.439496    4860 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:45:00 functional-196950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:45:00 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:45:01 functional-196950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 06 10:45:01 functional-196950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:45:01 functional-196950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:45:01 functional-196950 kubelet[4949]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:45:01 functional-196950 kubelet[4949]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:45:01 functional-196950 kubelet[4949]: E1206 10:45:01.285733    4949 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:45:01 functional-196950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:45:01 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
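The captured kubelet journal above pinpoints the failure: the v1.35.0-beta.0 kubelet exits at startup with "kubelet is configured to not run on a host using cgroup v1", systemd restarts it in a loop (restart counter 320-322), so kubeadm's 4m0s health check against http://127.0.0.1:10248/healthz can never succeed. A minimal sketch for reproducing the diagnosis by hand, using the commands kubeadm itself suggests plus a cgroup-version check (entering the node via `minikube ssh -p functional-196950` is an assumption about how this profile is reached):

	# inside the minikube node, e.g. via: minikube ssh -p functional-196950
	systemctl status kubelet               # expect status=1/FAILURE and a climbing restart counter
	journalctl -xeu kubelet | tail -n 50   # expect the cgroup v1 validation error shown above
	stat -fc %T /sys/fs/cgroup             # "tmpfs" = cgroup v1 (this host); "cgroup2fs" = cgroup v2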
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-196950 -n functional-196950
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-196950 -n functional-196950: exit status 6 (334.895894ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1206 10:45:01.725339  399218 status.go:458] kubeconfig endpoint: get endpoint: "functional-196950" does not appear in /home/jenkins/minikube-integration/22047-362985/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "functional-196950" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (501.74s)
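Both tools name a remediation in their own output: minikube suggests retrying with --extra-config=kubelet.cgroup-driver=systemd (issue #4172 above), and kubeadm's preflight warning says that kubelet v1.35+ only runs on a cgroup v1 host when the kubelet configuration option 'FailCgroupV1' is explicitly set to 'false'. A hedged sketch of both options; the lower-camel failCgroupV1 spelling in the KubeletConfiguration file is an assumption inferred from the warning's wording:

	# option 1: retry with minikube's own suggestion
	out/minikube-linux-arm64 start -p functional-196950 --extra-config=kubelet.cgroup-driver=systemd

	# option 2: opt back in to cgroup v1, per the kubeadm warning (field spelling assumed):
	#   apiVersion: kubelet.config.k8s.io/v1beta1
	#   kind: KubeletConfiguration
	#   failCgroupV1: false

	# longer term, the warning's linked KEP asks hosts to migrate to cgroup v2,
	# typically by booting with systemd.unified_cgroup_hierarchy=1.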

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (369.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1206 10:45:01.742475  364855 config.go:182] Loaded profile config "functional-196950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-196950 --alsologtostderr -v=8
E1206 10:45:45.364864  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-205266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:46:13.073960  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-205266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:48:41.813639  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:50:04.885844  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:50:45.364895  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-205266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-196950 --alsologtostderr -v=8: exit status 80 (6m5.75332995s)

-- stdout --
	* [functional-196950] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-362985/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-362985/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-196950" primary control-plane node in "functional-196950" cluster
	* Pulling base image v0.0.48-1764843390-22032 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1206 10:45:01.787203  399286 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:45:01.787433  399286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:45:01.787467  399286 out.go:374] Setting ErrFile to fd 2...
	I1206 10:45:01.787489  399286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:45:01.787778  399286 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 10:45:01.788186  399286 out.go:368] Setting JSON to false
	I1206 10:45:01.789151  399286 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8853,"bootTime":1765009049,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 10:45:01.789259  399286 start.go:143] virtualization:  
	I1206 10:45:01.792729  399286 out.go:179] * [functional-196950] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:45:01.796494  399286 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 10:45:01.796574  399286 notify.go:221] Checking for updates...
	I1206 10:45:01.802323  399286 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:45:01.805290  399286 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 10:45:01.808768  399286 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-362985/.minikube
	I1206 10:45:01.811515  399286 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:45:01.814379  399286 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:45:01.817672  399286 config.go:182] Loaded profile config "functional-196950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:45:01.817798  399286 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:45:01.851887  399286 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:45:01.852009  399286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:45:01.921321  399286 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:45:01.909571102 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:45:01.921426  399286 docker.go:319] overlay module found
	I1206 10:45:01.926314  399286 out.go:179] * Using the docker driver based on existing profile
	I1206 10:45:01.929149  399286 start.go:309] selected driver: docker
	I1206 10:45:01.929174  399286 start.go:927] validating driver "docker" against &{Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:45:01.929299  399286 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:45:01.929402  399286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:45:02.005684  399286 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:45:01.991905909 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:45:02.006178  399286 cni.go:84] Creating CNI manager for ""
	I1206 10:45:02.006252  399286 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:45:02.006308  399286 start.go:353] cluster config:
	{Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:45:02.012455  399286 out.go:179] * Starting "functional-196950" primary control-plane node in "functional-196950" cluster
	I1206 10:45:02.015293  399286 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 10:45:02.018502  399286 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:45:02.021547  399286 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 10:45:02.021609  399286 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1206 10:45:02.021620  399286 cache.go:65] Caching tarball of preloaded images
	I1206 10:45:02.021746  399286 preload.go:238] Found /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1206 10:45:02.021762  399286 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1206 10:45:02.021883  399286 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/config.json ...
	I1206 10:45:02.022120  399286 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:45:02.058171  399286 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 10:45:02.058196  399286 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 10:45:02.058216  399286 cache.go:243] Successfully downloaded all kic artifacts
	I1206 10:45:02.058248  399286 start.go:360] acquireMachinesLock for functional-196950: {Name:mkd2471f275d1d2a438cb4ce89f1d1521a0fb340 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:45:02.058324  399286 start.go:364] duration metric: took 51.241µs to acquireMachinesLock for "functional-196950"
	I1206 10:45:02.058347  399286 start.go:96] Skipping create...Using existing machine configuration
	I1206 10:45:02.058352  399286 fix.go:54] fixHost starting: 
	I1206 10:45:02.058623  399286 cli_runner.go:164] Run: docker container inspect functional-196950 --format={{.State.Status}}
	I1206 10:45:02.075952  399286 fix.go:112] recreateIfNeeded on functional-196950: state=Running err=<nil>
	W1206 10:45:02.075984  399286 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 10:45:02.079219  399286 out.go:252] * Updating the running docker "functional-196950" container ...
	I1206 10:45:02.079261  399286 machine.go:94] provisionDockerMachine start ...
	I1206 10:45:02.079396  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:02.097606  399286 main.go:143] libmachine: Using SSH client type: native
	I1206 10:45:02.097945  399286 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1206 10:45:02.097963  399286 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:45:02.251117  399286 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-196950
	
	I1206 10:45:02.251145  399286 ubuntu.go:182] provisioning hostname "functional-196950"
	I1206 10:45:02.251226  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:02.270896  399286 main.go:143] libmachine: Using SSH client type: native
	I1206 10:45:02.271293  399286 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1206 10:45:02.271357  399286 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-196950 && echo "functional-196950" | sudo tee /etc/hostname
	I1206 10:45:02.434988  399286 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-196950
	
	I1206 10:45:02.435098  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:02.453713  399286 main.go:143] libmachine: Using SSH client type: native
	I1206 10:45:02.454033  399286 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1206 10:45:02.454055  399286 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-196950' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-196950/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-196950' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:45:02.607868  399286 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 10:45:02.607903  399286 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-362985/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-362985/.minikube}
	I1206 10:45:02.607940  399286 ubuntu.go:190] setting up certificates
	I1206 10:45:02.607949  399286 provision.go:84] configureAuth start
	I1206 10:45:02.608015  399286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-196950
	I1206 10:45:02.626134  399286 provision.go:143] copyHostCerts
	I1206 10:45:02.626186  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem
	I1206 10:45:02.626227  399286 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem, removing ...
	I1206 10:45:02.626247  399286 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem
	I1206 10:45:02.626323  399286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem (1082 bytes)
	I1206 10:45:02.626456  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem
	I1206 10:45:02.626477  399286 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem, removing ...
	I1206 10:45:02.626487  399286 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem
	I1206 10:45:02.626523  399286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem (1123 bytes)
	I1206 10:45:02.626584  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem
	I1206 10:45:02.626607  399286 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem, removing ...
	I1206 10:45:02.626611  399286 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem
	I1206 10:45:02.626634  399286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem (1679 bytes)
	I1206 10:45:02.626683  399286 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem org=jenkins.functional-196950 san=[127.0.0.1 192.168.49.2 functional-196950 localhost minikube]
	I1206 10:45:02.961448  399286 provision.go:177] copyRemoteCerts
	I1206 10:45:02.961531  399286 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:45:02.961575  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:02.978755  399286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:45:03.095893  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1206 10:45:03.095982  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 10:45:03.114611  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1206 10:45:03.114706  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 10:45:03.135133  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1206 10:45:03.135195  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 10:45:03.153562  399286 provision.go:87] duration metric: took 545.588133ms to configureAuth
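	(Aside: configureAuth above regenerates the machine's server certificate with the SANs listed at provision.go:117 and pushes the CA/server cert/key to /etc/docker on the node. An illustrative way to double-check the SANs on the generated cert — the openssl invocation is a sketch and needs a reasonably recent openssl, it is not part of this test run:
	    openssl x509 -in /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem -noout -ext subjectAltName
	    # expected, per the san=[...] list logged above: 127.0.0.1, 192.168.49.2, functional-196950, localhost, minikube
	)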
	I1206 10:45:03.153601  399286 ubuntu.go:206] setting minikube options for container-runtime
	I1206 10:45:03.153843  399286 config.go:182] Loaded profile config "functional-196950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:45:03.153992  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:03.171946  399286 main.go:143] libmachine: Using SSH client type: native
	I1206 10:45:03.172256  399286 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1206 10:45:03.172279  399286 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 10:45:03.524489  399286 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 10:45:03.524512  399286 machine.go:97] duration metric: took 1.445242076s to provisionDockerMachine
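	(Aside: the SSH command above persisted CRIO_MINIKUBE_OPTIONS to /etc/sysconfig/crio.minikube and restarted CRI-O. A hedged way to confirm the flag survived the restart — illustrative, not captured from this run:
	    minikube -p functional-196950 ssh -- cat /etc/sysconfig/crio.minikube
	    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	)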
	I1206 10:45:03.524523  399286 start.go:293] postStartSetup for "functional-196950" (driver="docker")
	I1206 10:45:03.524536  399286 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:45:03.524603  399286 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:45:03.524644  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:03.555449  399286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:45:03.668233  399286 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:45:03.672046  399286 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1206 10:45:03.672068  399286 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1206 10:45:03.672073  399286 command_runner.go:130] > VERSION_ID="12"
	I1206 10:45:03.672078  399286 command_runner.go:130] > VERSION="12 (bookworm)"
	I1206 10:45:03.672084  399286 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1206 10:45:03.672087  399286 command_runner.go:130] > ID=debian
	I1206 10:45:03.672092  399286 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1206 10:45:03.672114  399286 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1206 10:45:03.672130  399286 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1206 10:45:03.672206  399286 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 10:45:03.672228  399286 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 10:45:03.672240  399286 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-362985/.minikube/addons for local assets ...
	I1206 10:45:03.672300  399286 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-362985/.minikube/files for local assets ...
	I1206 10:45:03.672390  399286 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem -> 3648552.pem in /etc/ssl/certs
	I1206 10:45:03.672402  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem -> /etc/ssl/certs/3648552.pem
	I1206 10:45:03.672481  399286 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/test/nested/copy/364855/hosts -> hosts in /etc/test/nested/copy/364855
	I1206 10:45:03.672489  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/test/nested/copy/364855/hosts -> /etc/test/nested/copy/364855/hosts
	I1206 10:45:03.672536  399286 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/364855
	I1206 10:45:03.681376  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem --> /etc/ssl/certs/3648552.pem (1708 bytes)
	I1206 10:45:03.700845  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/test/nested/copy/364855/hosts --> /etc/test/nested/copy/364855/hosts (40 bytes)
	I1206 10:45:03.720695  399286 start.go:296] duration metric: took 196.153156ms for postStartSetup
	I1206 10:45:03.720782  399286 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:45:03.720851  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:03.739871  399286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:45:03.844136  399286 command_runner.go:130] > 11%
	I1206 10:45:03.844709  399286 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 10:45:03.849387  399286 command_runner.go:130] > 174G
	I1206 10:45:03.849978  399286 fix.go:56] duration metric: took 1.791620292s for fixHost
	I1206 10:45:03.850000  399286 start.go:83] releasing machines lock for "functional-196950", held for 1.791664797s
	I1206 10:45:03.850077  399286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-196950
	I1206 10:45:03.867785  399286 ssh_runner.go:195] Run: cat /version.json
	I1206 10:45:03.867838  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:03.868113  399286 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:45:03.868167  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:03.886546  399286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:45:03.911694  399286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:45:03.991370  399286 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764843390-22032", "minikube_version": "v1.37.0", "commit": "d7bfd7d6d80c3eeb1d6cf1c5f081f8642bc1997e"}
	I1206 10:45:03.991537  399286 ssh_runner.go:195] Run: systemctl --version
	I1206 10:45:04.088215  399286 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1206 10:45:04.091250  399286 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1206 10:45:04.091291  399286 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1206 10:45:04.091431  399286 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 10:45:04.130964  399286 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1206 10:45:04.136249  399286 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1206 10:45:04.136293  399286 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:45:04.136352  399286 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:45:04.145113  399286 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
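	(Aside: the find/mv step above sidelines any bridge or podman CNI configs by renaming them with a .mk_disabled suffix; on this node none were present. To see what it would have touched — sketch only:
	    sudo ls -la /etc/cni/net.d/
	    # any *bridge* or *podman* config would now carry a .mk_disabled suffix
	)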
	I1206 10:45:04.145182  399286 start.go:496] detecting cgroup driver to use...
	I1206 10:45:04.145222  399286 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 10:45:04.145282  399286 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 10:45:04.161420  399286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 10:45:04.175205  399286 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:45:04.175315  399286 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:45:04.191496  399286 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:45:04.205243  399286 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:45:04.349911  399286 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:45:04.470887  399286 docker.go:234] disabling docker service ...
	I1206 10:45:04.471006  399286 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:45:04.486933  399286 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:45:04.500707  399286 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:45:04.632842  399286 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:45:04.756279  399286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 10:45:04.770461  399286 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:45:04.785365  399286 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
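	(Aside: with /etc/crictl.yaml pointing at the CRI-O socket, later crictl calls can omit the endpoint; the explicit equivalent would be — sketch:
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	)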
	I1206 10:45:04.786482  399286 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 10:45:04.786596  399286 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:45:04.796852  399286 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 10:45:04.796980  399286 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:45:04.806654  399286 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:45:04.816002  399286 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:45:04.825576  399286 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:45:04.834547  399286 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:45:04.844889  399286 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:45:04.854032  399286 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
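	(Aside: pieced together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly these settings — reconstructed from the commands in this log, not captured from the node:
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    # pause_image = "registry.k8s.io/pause:3.10.1"
	    # cgroup_manager = "cgroupfs"
	    # conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0",
	)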
	I1206 10:45:04.863103  399286 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:45:04.870297  399286 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1206 10:45:04.871475  399286 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 10:45:04.879247  399286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:45:04.992959  399286 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 10:45:05.192927  399286 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 10:45:05.193085  399286 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 10:45:05.197937  399286 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1206 10:45:05.197964  399286 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1206 10:45:05.197971  399286 command_runner.go:130] > Device: 0,72	Inode: 1640        Links: 1
	I1206 10:45:05.197987  399286 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 10:45:05.198031  399286 command_runner.go:130] > Access: 2025-12-06 10:45:05.125759427 +0000
	I1206 10:45:05.198049  399286 command_runner.go:130] > Modify: 2025-12-06 10:45:05.125759427 +0000
	I1206 10:45:05.198060  399286 command_runner.go:130] > Change: 2025-12-06 10:45:05.125759427 +0000
	I1206 10:45:05.198063  399286 command_runner.go:130] >  Birth: -
	I1206 10:45:05.198081  399286 start.go:564] Will wait 60s for crictl version
	I1206 10:45:05.198158  399286 ssh_runner.go:195] Run: which crictl
	I1206 10:45:05.202333  399286 command_runner.go:130] > /usr/local/bin/crictl
	I1206 10:45:05.202451  399286 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 10:45:05.227773  399286 command_runner.go:130] > Version:  0.1.0
	I1206 10:45:05.227855  399286 command_runner.go:130] > RuntimeName:  cri-o
	I1206 10:45:05.227876  399286 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1206 10:45:05.227895  399286 command_runner.go:130] > RuntimeApiVersion:  v1
	I1206 10:45:05.230308  399286 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 10:45:05.230460  399286 ssh_runner.go:195] Run: crio --version
	I1206 10:45:05.261871  399286 command_runner.go:130] > crio version 1.34.3
	I1206 10:45:05.261971  399286 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1206 10:45:05.261992  399286 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1206 10:45:05.262014  399286 command_runner.go:130] >    GitTreeState:   dirty
	I1206 10:45:05.262045  399286 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1206 10:45:05.262062  399286 command_runner.go:130] >    GoVersion:      go1.24.6
	I1206 10:45:05.262083  399286 command_runner.go:130] >    Compiler:       gc
	I1206 10:45:05.262102  399286 command_runner.go:130] >    Platform:       linux/arm64
	I1206 10:45:05.262141  399286 command_runner.go:130] >    Linkmode:       static
	I1206 10:45:05.262176  399286 command_runner.go:130] >    BuildTags:
	I1206 10:45:05.262192  399286 command_runner.go:130] >      static
	I1206 10:45:05.262229  399286 command_runner.go:130] >      netgo
	I1206 10:45:05.262248  399286 command_runner.go:130] >      osusergo
	I1206 10:45:05.262264  399286 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1206 10:45:05.262283  399286 command_runner.go:130] >      seccomp
	I1206 10:45:05.262317  399286 command_runner.go:130] >      apparmor
	I1206 10:45:05.262335  399286 command_runner.go:130] >      selinux
	I1206 10:45:05.262352  399286 command_runner.go:130] >    LDFlags:          unknown
	I1206 10:45:05.262371  399286 command_runner.go:130] >    SeccompEnabled:   true
	I1206 10:45:05.262402  399286 command_runner.go:130] >    AppArmorEnabled:  false
	I1206 10:45:05.263735  399286 ssh_runner.go:195] Run: crio --version
	I1206 10:45:05.292275  399286 command_runner.go:130] > crio version 1.34.3
	I1206 10:45:05.292350  399286 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1206 10:45:05.292370  399286 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1206 10:45:05.292389  399286 command_runner.go:130] >    GitTreeState:   dirty
	I1206 10:45:05.292419  399286 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1206 10:45:05.292445  399286 command_runner.go:130] >    GoVersion:      go1.24.6
	I1206 10:45:05.292464  399286 command_runner.go:130] >    Compiler:       gc
	I1206 10:45:05.292484  399286 command_runner.go:130] >    Platform:       linux/arm64
	I1206 10:45:05.292510  399286 command_runner.go:130] >    Linkmode:       static
	I1206 10:45:05.292529  399286 command_runner.go:130] >    BuildTags:
	I1206 10:45:05.292548  399286 command_runner.go:130] >      static
	I1206 10:45:05.292577  399286 command_runner.go:130] >      netgo
	I1206 10:45:05.292594  399286 command_runner.go:130] >      osusergo
	I1206 10:45:05.292622  399286 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1206 10:45:05.292652  399286 command_runner.go:130] >      seccomp
	I1206 10:45:05.292669  399286 command_runner.go:130] >      apparmor
	I1206 10:45:05.292692  399286 command_runner.go:130] >      selinux
	I1206 10:45:05.292731  399286 command_runner.go:130] >    LDFlags:          unknown
	I1206 10:45:05.292749  399286 command_runner.go:130] >    SeccompEnabled:   true
	I1206 10:45:05.292767  399286 command_runner.go:130] >    AppArmorEnabled:  false
	I1206 10:45:05.300434  399286 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1206 10:45:05.303425  399286 cli_runner.go:164] Run: docker network inspect functional-196950 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:45:05.320718  399286 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 10:45:05.324954  399286 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1206 10:45:05.325142  399286 kubeadm.go:884] updating cluster {Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
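	(Aside: the cluster spec echoed above is the profile's persisted state; the same data lives in the profile's config.json under the .minikube tree — path assumed from the layout seen earlier in this log, jq usage illustrative:
	    jq '.KubernetesConfig.KubernetesVersion' /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/config.json
	    # "v1.35.0-beta.0"
	)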
	I1206 10:45:05.325270  399286 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 10:45:05.325346  399286 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:45:05.356177  399286 command_runner.go:130] > {
	I1206 10:45:05.356195  399286 command_runner.go:130] >   "images":  [
	I1206 10:45:05.356199  399286 command_runner.go:130] >     {
	I1206 10:45:05.356208  399286 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1206 10:45:05.356213  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.356218  399286 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1206 10:45:05.356222  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356226  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.356235  399286 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1206 10:45:05.356243  399286 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1206 10:45:05.356246  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356251  399286 command_runner.go:130] >       "size":  "111333938",
	I1206 10:45:05.356254  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.356259  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.356262  399286 command_runner.go:130] >     },
	I1206 10:45:05.356265  399286 command_runner.go:130] >     {
	I1206 10:45:05.356272  399286 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1206 10:45:05.356285  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.356291  399286 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1206 10:45:05.356294  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356298  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.356307  399286 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1206 10:45:05.356315  399286 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1206 10:45:05.356318  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356322  399286 command_runner.go:130] >       "size":  "29037500",
	I1206 10:45:05.356326  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.356334  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.356337  399286 command_runner.go:130] >     },
	I1206 10:45:05.356340  399286 command_runner.go:130] >     {
	I1206 10:45:05.356346  399286 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1206 10:45:05.356350  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.356355  399286 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1206 10:45:05.356358  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356362  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.356369  399286 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1206 10:45:05.356377  399286 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1206 10:45:05.356380  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356385  399286 command_runner.go:130] >       "size":  "74491780",
	I1206 10:45:05.356389  399286 command_runner.go:130] >       "username":  "nonroot",
	I1206 10:45:05.356393  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.356396  399286 command_runner.go:130] >     },
	I1206 10:45:05.356399  399286 command_runner.go:130] >     {
	I1206 10:45:05.356405  399286 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1206 10:45:05.356409  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.356428  399286 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1206 10:45:05.356433  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356438  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.356446  399286 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1206 10:45:05.356453  399286 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1206 10:45:05.356457  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356465  399286 command_runner.go:130] >       "size":  "60857170",
	I1206 10:45:05.356469  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.356472  399286 command_runner.go:130] >         "value":  "0"
	I1206 10:45:05.356475  399286 command_runner.go:130] >       },
	I1206 10:45:05.356488  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.356492  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.356495  399286 command_runner.go:130] >     },
	I1206 10:45:05.356498  399286 command_runner.go:130] >     {
	I1206 10:45:05.356505  399286 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1206 10:45:05.356508  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.356513  399286 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1206 10:45:05.356516  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356520  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.356528  399286 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1206 10:45:05.356536  399286 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1206 10:45:05.356539  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356543  399286 command_runner.go:130] >       "size":  "84949999",
	I1206 10:45:05.356546  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.356550  399286 command_runner.go:130] >         "value":  "0"
	I1206 10:45:05.356553  399286 command_runner.go:130] >       },
	I1206 10:45:05.356557  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.356561  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.356564  399286 command_runner.go:130] >     },
	I1206 10:45:05.356567  399286 command_runner.go:130] >     {
	I1206 10:45:05.356573  399286 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1206 10:45:05.356577  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.356583  399286 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1206 10:45:05.356586  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356590  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.356598  399286 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1206 10:45:05.356606  399286 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1206 10:45:05.356609  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356617  399286 command_runner.go:130] >       "size":  "72170325",
	I1206 10:45:05.356623  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.356627  399286 command_runner.go:130] >         "value":  "0"
	I1206 10:45:05.356631  399286 command_runner.go:130] >       },
	I1206 10:45:05.356634  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.356638  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.356641  399286 command_runner.go:130] >     },
	I1206 10:45:05.356643  399286 command_runner.go:130] >     {
	I1206 10:45:05.356650  399286 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1206 10:45:05.356654  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.356659  399286 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1206 10:45:05.356662  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356666  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.356674  399286 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1206 10:45:05.356681  399286 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1206 10:45:05.356684  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356688  399286 command_runner.go:130] >       "size":  "74106775",
	I1206 10:45:05.356692  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.356695  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.356698  399286 command_runner.go:130] >     },
	I1206 10:45:05.356701  399286 command_runner.go:130] >     {
	I1206 10:45:05.356708  399286 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1206 10:45:05.356711  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.356716  399286 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1206 10:45:05.356719  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356723  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.356730  399286 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1206 10:45:05.356747  399286 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1206 10:45:05.356751  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356755  399286 command_runner.go:130] >       "size":  "49822549",
	I1206 10:45:05.356759  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.356763  399286 command_runner.go:130] >         "value":  "0"
	I1206 10:45:05.356766  399286 command_runner.go:130] >       },
	I1206 10:45:05.356770  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.356778  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.356781  399286 command_runner.go:130] >     },
	I1206 10:45:05.356784  399286 command_runner.go:130] >     {
	I1206 10:45:05.356790  399286 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1206 10:45:05.356794  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.356798  399286 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1206 10:45:05.356801  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356805  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.356812  399286 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1206 10:45:05.356820  399286 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1206 10:45:05.356823  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356826  399286 command_runner.go:130] >       "size":  "519884",
	I1206 10:45:05.356830  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.356833  399286 command_runner.go:130] >         "value":  "65535"
	I1206 10:45:05.356836  399286 command_runner.go:130] >       },
	I1206 10:45:05.356840  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.356843  399286 command_runner.go:130] >       "pinned":  true
	I1206 10:45:05.356850  399286 command_runner.go:130] >     }
	I1206 10:45:05.356853  399286 command_runner.go:130] >   ]
	I1206 10:45:05.356857  399286 command_runner.go:130] > }
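	(Aside: the preload check parses this JSON and compares it against the image set expected for v1.35.0-beta.0 on crio. To eyeball the same list by hand — assumes jq is available on the node:
	    sudo crictl images --output json | jq -r '.images[].repoTags[]'
	    # docker.io/kindest/kindnetd:v20250512-df8de77b
	    # gcr.io/k8s-minikube/storage-provisioner:v5
	    # registry.k8s.io/coredns/coredns:v1.13.1
	    # registry.k8s.io/etcd:3.6.5-0
	    # registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	    # ...
	)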
	I1206 10:45:05.358491  399286 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:45:05.358523  399286 crio.go:433] Images already preloaded, skipping extraction
	I1206 10:45:05.358585  399286 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:45:05.381820  399286 command_runner.go:130] > {
	I1206 10:45:05.381840  399286 command_runner.go:130] >   "images":  [
	I1206 10:45:05.381844  399286 command_runner.go:130] >     {
	I1206 10:45:05.381853  399286 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1206 10:45:05.381857  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.381864  399286 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1206 10:45:05.381867  399286 command_runner.go:130] >       ],
	I1206 10:45:05.381871  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.381880  399286 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1206 10:45:05.381888  399286 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1206 10:45:05.381892  399286 command_runner.go:130] >       ],
	I1206 10:45:05.381896  399286 command_runner.go:130] >       "size":  "111333938",
	I1206 10:45:05.381900  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.381909  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.381912  399286 command_runner.go:130] >     },
	I1206 10:45:05.381916  399286 command_runner.go:130] >     {
	I1206 10:45:05.381922  399286 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1206 10:45:05.381926  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.381932  399286 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1206 10:45:05.381935  399286 command_runner.go:130] >       ],
	I1206 10:45:05.381939  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.381947  399286 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1206 10:45:05.381956  399286 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1206 10:45:05.381959  399286 command_runner.go:130] >       ],
	I1206 10:45:05.381963  399286 command_runner.go:130] >       "size":  "29037500",
	I1206 10:45:05.381967  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.381973  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.381977  399286 command_runner.go:130] >     },
	I1206 10:45:05.381980  399286 command_runner.go:130] >     {
	I1206 10:45:05.381987  399286 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1206 10:45:05.381990  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.381999  399286 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1206 10:45:05.382003  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382007  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.382014  399286 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1206 10:45:05.382022  399286 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1206 10:45:05.382025  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382029  399286 command_runner.go:130] >       "size":  "74491780",
	I1206 10:45:05.382033  399286 command_runner.go:130] >       "username":  "nonroot",
	I1206 10:45:05.382037  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.382040  399286 command_runner.go:130] >     },
	I1206 10:45:05.382043  399286 command_runner.go:130] >     {
	I1206 10:45:05.382049  399286 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1206 10:45:05.382053  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.382058  399286 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1206 10:45:05.382063  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382067  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.382074  399286 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1206 10:45:05.382082  399286 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1206 10:45:05.382085  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382089  399286 command_runner.go:130] >       "size":  "60857170",
	I1206 10:45:05.382093  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.382096  399286 command_runner.go:130] >         "value":  "0"
	I1206 10:45:05.382100  399286 command_runner.go:130] >       },
	I1206 10:45:05.382398  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.382411  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.382415  399286 command_runner.go:130] >     },
	I1206 10:45:05.382419  399286 command_runner.go:130] >     {
	I1206 10:45:05.382427  399286 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1206 10:45:05.382437  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.382443  399286 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1206 10:45:05.382446  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382450  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.382463  399286 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1206 10:45:05.382476  399286 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1206 10:45:05.382479  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382484  399286 command_runner.go:130] >       "size":  "84949999",
	I1206 10:45:05.382492  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.382495  399286 command_runner.go:130] >         "value":  "0"
	I1206 10:45:05.382499  399286 command_runner.go:130] >       },
	I1206 10:45:05.382503  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.382507  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.382510  399286 command_runner.go:130] >     },
	I1206 10:45:05.382514  399286 command_runner.go:130] >     {
	I1206 10:45:05.382524  399286 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1206 10:45:05.382528  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.382534  399286 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1206 10:45:05.382541  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382546  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.382555  399286 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1206 10:45:05.382568  399286 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1206 10:45:05.382571  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382575  399286 command_runner.go:130] >       "size":  "72170325",
	I1206 10:45:05.382579  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.382583  399286 command_runner.go:130] >         "value":  "0"
	I1206 10:45:05.382590  399286 command_runner.go:130] >       },
	I1206 10:45:05.382594  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.382597  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.382601  399286 command_runner.go:130] >     },
	I1206 10:45:05.382604  399286 command_runner.go:130] >     {
	I1206 10:45:05.382615  399286 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1206 10:45:05.382618  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.382624  399286 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1206 10:45:05.382627  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382631  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.382643  399286 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1206 10:45:05.382651  399286 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1206 10:45:05.382658  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382666  399286 command_runner.go:130] >       "size":  "74106775",
	I1206 10:45:05.382672  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.382676  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.382679  399286 command_runner.go:130] >     },
	I1206 10:45:05.382682  399286 command_runner.go:130] >     {
	I1206 10:45:05.382693  399286 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1206 10:45:05.382697  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.382702  399286 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1206 10:45:05.382706  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382710  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.382722  399286 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1206 10:45:05.382745  399286 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1206 10:45:05.382753  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382757  399286 command_runner.go:130] >       "size":  "49822549",
	I1206 10:45:05.382761  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.382765  399286 command_runner.go:130] >         "value":  "0"
	I1206 10:45:05.382768  399286 command_runner.go:130] >       },
	I1206 10:45:05.382772  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.382780  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.382783  399286 command_runner.go:130] >     },
	I1206 10:45:05.382786  399286 command_runner.go:130] >     {
	I1206 10:45:05.382793  399286 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1206 10:45:05.382797  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.382805  399286 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1206 10:45:05.382808  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382812  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.382820  399286 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1206 10:45:05.382832  399286 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1206 10:45:05.382835  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382839  399286 command_runner.go:130] >       "size":  "519884",
	I1206 10:45:05.382843  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.382847  399286 command_runner.go:130] >         "value":  "65535"
	I1206 10:45:05.382857  399286 command_runner.go:130] >       },
	I1206 10:45:05.382861  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.382865  399286 command_runner.go:130] >       "pinned":  true
	I1206 10:45:05.382868  399286 command_runner.go:130] >     }
	I1206 10:45:05.382871  399286 command_runner.go:130] >   ]
	I1206 10:45:05.382874  399286 command_runner.go:130] > }
	I1206 10:45:05.396183  399286 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:45:05.396208  399286 cache_images.go:86] Images are preloaded, skipping loading
	I1206 10:45:05.396219  399286 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1206 10:45:05.396325  399286 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-196950 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
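	(Aside: minikube installs the ExecStart override above as a systemd drop-in for the kubelet; to view the unit as systemd actually resolves it — illustrative:
	    minikube -p functional-196950 ssh -- sudo systemctl cat kubelet.service
	)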
	I1206 10:45:05.396421  399286 ssh_runner.go:195] Run: crio config
	I1206 10:45:05.425462  399286 command_runner.go:130] ! time="2025-12-06T10:45:05.425119459Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1206 10:45:05.425532  399286 command_runner.go:130] ! time="2025-12-06T10:45:05.425157991Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1206 10:45:05.425754  399286 command_runner.go:130] ! time="2025-12-06T10:45:05.425195308Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1206 10:45:05.425797  399286 command_runner.go:130] ! time="2025-12-06T10:45:05.42522017Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1206 10:45:05.425982  399286 command_runner.go:130] ! time="2025-12-06T10:45:05.425299687Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:45:05.426160  399286 command_runner.go:130] ! time="2025-12-06T10:45:05.42561672Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1206 10:45:05.442529  399286 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1206 10:45:05.470811  399286 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1206 10:45:05.470887  399286 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1206 10:45:05.470910  399286 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1206 10:45:05.470925  399286 command_runner.go:130] > #
	I1206 10:45:05.470961  399286 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1206 10:45:05.470990  399286 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1206 10:45:05.471012  399286 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1206 10:45:05.471037  399286 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1206 10:45:05.471066  399286 command_runner.go:130] > # reload'.
	I1206 10:45:05.471089  399286 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1206 10:45:05.471110  399286 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1206 10:45:05.471132  399286 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1206 10:45:05.471165  399286 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1206 10:45:05.471189  399286 command_runner.go:130] > [crio]
	I1206 10:45:05.471211  399286 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1206 10:45:05.471233  399286 command_runner.go:130] > # containers images, in this directory.
	I1206 10:45:05.471266  399286 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1206 10:45:05.471291  399286 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1206 10:45:05.471336  399286 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1206 10:45:05.471369  399286 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1206 10:45:05.471416  399286 command_runner.go:130] > # imagestore = ""
	I1206 10:45:05.471447  399286 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1206 10:45:05.471467  399286 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1206 10:45:05.471498  399286 command_runner.go:130] > # storage_driver = "overlay"
	I1206 10:45:05.471527  399286 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1206 10:45:05.471540  399286 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1206 10:45:05.471544  399286 command_runner.go:130] > # storage_option = [
	I1206 10:45:05.471548  399286 command_runner.go:130] > # ]
	I1206 10:45:05.471554  399286 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1206 10:45:05.471561  399286 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1206 10:45:05.471566  399286 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1206 10:45:05.471572  399286 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1206 10:45:05.471584  399286 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1206 10:45:05.471601  399286 command_runner.go:130] > # always happen on a node reboot
	I1206 10:45:05.471614  399286 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1206 10:45:05.471624  399286 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1206 10:45:05.471631  399286 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1206 10:45:05.471647  399286 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1206 10:45:05.471665  399286 command_runner.go:130] > # version_file_persist = ""
	I1206 10:45:05.471674  399286 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1206 10:45:05.471685  399286 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1206 10:45:05.471689  399286 command_runner.go:130] > # internal_wipe = true
	I1206 10:45:05.471701  399286 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1206 10:45:05.471736  399286 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1206 10:45:05.471747  399286 command_runner.go:130] > # internal_repair = true
	I1206 10:45:05.471753  399286 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1206 10:45:05.471760  399286 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1206 10:45:05.471768  399286 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1206 10:45:05.471774  399286 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1206 10:45:05.471790  399286 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1206 10:45:05.471793  399286 command_runner.go:130] > [crio.api]
	I1206 10:45:05.471799  399286 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1206 10:45:05.471810  399286 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1206 10:45:05.471817  399286 command_runner.go:130] > # IP address on which the stream server will listen.
	I1206 10:45:05.471822  399286 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1206 10:45:05.471829  399286 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1206 10:45:05.471837  399286 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1206 10:45:05.471841  399286 command_runner.go:130] > # stream_port = "0"
	I1206 10:45:05.471852  399286 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1206 10:45:05.471856  399286 command_runner.go:130] > # stream_enable_tls = false
	I1206 10:45:05.471867  399286 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1206 10:45:05.471871  399286 command_runner.go:130] > # stream_idle_timeout = ""
	I1206 10:45:05.471891  399286 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1206 10:45:05.471897  399286 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1206 10:45:05.471905  399286 command_runner.go:130] > # stream_tls_cert = ""
	I1206 10:45:05.471912  399286 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1206 10:45:05.471918  399286 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1206 10:45:05.471922  399286 command_runner.go:130] > # stream_tls_key = ""
	I1206 10:45:05.471928  399286 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1206 10:45:05.471937  399286 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1206 10:45:05.471942  399286 command_runner.go:130] > # automatically pick up the changes.
	I1206 10:45:05.471950  399286 command_runner.go:130] > # stream_tls_ca = ""
	I1206 10:45:05.471981  399286 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1206 10:45:05.471991  399286 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1206 10:45:05.471999  399286 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1206 10:45:05.472004  399286 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
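A minimal sketch of what enabling the stream TLS options documented above could look like; the certificate paths and the fixed port are hypothetical placeholders, not values from this run:

    [crio.api]
    stream_address = "127.0.0.1"
    stream_port = "10010"                          # hypothetical fixed port instead of a random one
    stream_enable_tls = true
    stream_tls_cert = "/etc/crio/tls/stream.crt"   # hypothetical path; reloaded on change
    stream_tls_key = "/etc/crio/tls/stream.key"    # hypothetical path
    stream_tls_ca = "/etc/crio/tls/ca.crt"         # hypothetical path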
	I1206 10:45:05.472010  399286 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1206 10:45:05.472018  399286 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1206 10:45:05.472022  399286 command_runner.go:130] > [crio.runtime]
	I1206 10:45:05.472029  399286 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1206 10:45:05.472036  399286 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1206 10:45:05.472041  399286 command_runner.go:130] > # "nofile=1024:2048"
	I1206 10:45:05.472057  399286 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1206 10:45:05.472061  399286 command_runner.go:130] > # default_ulimits = [
	I1206 10:45:05.472064  399286 command_runner.go:130] > # ]
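A minimal sketch of populating that list, using the "<ulimit name>=<soft limit>:<hard limit>" format shown above; the values are illustrative, not defaults:

    default_ulimits = [
        "nofile=1024:2048",    # example value from the comment above
        "nproc=4096:8192",     # illustrative value
    ]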
	I1206 10:45:05.472070  399286 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1206 10:45:05.472077  399286 command_runner.go:130] > # no_pivot = false
	I1206 10:45:05.472083  399286 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1206 10:45:05.472090  399286 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1206 10:45:05.472095  399286 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1206 10:45:05.472103  399286 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1206 10:45:05.472108  399286 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1206 10:45:05.472117  399286 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1206 10:45:05.472123  399286 command_runner.go:130] > # conmon = ""
	I1206 10:45:05.472127  399286 command_runner.go:130] > # Cgroup setting for conmon
	I1206 10:45:05.472137  399286 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1206 10:45:05.472143  399286 command_runner.go:130] > conmon_cgroup = "pod"
	I1206 10:45:05.472152  399286 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1206 10:45:05.472157  399286 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1206 10:45:05.472164  399286 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1206 10:45:05.472168  399286 command_runner.go:130] > # conmon_env = [
	I1206 10:45:05.472173  399286 command_runner.go:130] > # ]
	I1206 10:45:05.472180  399286 command_runner.go:130] > # Additional environment variables to set for all the
	I1206 10:45:05.472188  399286 command_runner.go:130] > # containers. These are overridden if set in the
	I1206 10:45:05.472198  399286 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1206 10:45:05.472204  399286 command_runner.go:130] > # default_env = [
	I1206 10:45:05.472208  399286 command_runner.go:130] > # ]
	I1206 10:45:05.472213  399286 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1206 10:45:05.472223  399286 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1206 10:45:05.472229  399286 command_runner.go:130] > # selinux = false
	I1206 10:45:05.472236  399286 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1206 10:45:05.472246  399286 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1206 10:45:05.472252  399286 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:45:05.472255  399286 command_runner.go:130] > # seccomp_profile = ""
	I1206 10:45:05.472262  399286 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1206 10:45:05.472270  399286 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:45:05.472274  399286 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1206 10:45:05.472281  399286 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1206 10:45:05.472287  399286 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1206 10:45:05.472295  399286 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1206 10:45:05.472302  399286 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1206 10:45:05.472315  399286 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:45:05.472320  399286 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1206 10:45:05.472326  399286 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1206 10:45:05.472330  399286 command_runner.go:130] > # the cgroup blockio controller.
	I1206 10:45:05.472337  399286 command_runner.go:130] > # blockio_config_file = ""
	I1206 10:45:05.472345  399286 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1206 10:45:05.472353  399286 command_runner.go:130] > # blockio parameters.
	I1206 10:45:05.472357  399286 command_runner.go:130] > # blockio_reload = false
	I1206 10:45:05.472364  399286 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1206 10:45:05.472367  399286 command_runner.go:130] > # irqbalance daemon.
	I1206 10:45:05.472373  399286 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1206 10:45:05.472381  399286 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask CRI-O should
	I1206 10:45:05.472391  399286 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1206 10:45:05.472412  399286 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1206 10:45:05.472419  399286 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1206 10:45:05.472428  399286 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1206 10:45:05.472437  399286 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:45:05.472448  399286 command_runner.go:130] > # rdt_config_file = ""
	I1206 10:45:05.472455  399286 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1206 10:45:05.472459  399286 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1206 10:45:05.472465  399286 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1206 10:45:05.472472  399286 command_runner.go:130] > # separate_pull_cgroup = ""
	I1206 10:45:05.472479  399286 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1206 10:45:05.472486  399286 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1206 10:45:05.472498  399286 command_runner.go:130] > # will be added.
	I1206 10:45:05.472503  399286 command_runner.go:130] > # default_capabilities = [
	I1206 10:45:05.472506  399286 command_runner.go:130] > # 	"CHOWN",
	I1206 10:45:05.472510  399286 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1206 10:45:05.472520  399286 command_runner.go:130] > # 	"FSETID",
	I1206 10:45:05.472525  399286 command_runner.go:130] > # 	"FOWNER",
	I1206 10:45:05.472529  399286 command_runner.go:130] > # 	"SETGID",
	I1206 10:45:05.472539  399286 command_runner.go:130] > # 	"SETUID",
	I1206 10:45:05.472558  399286 command_runner.go:130] > # 	"SETPCAP",
	I1206 10:45:05.472573  399286 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1206 10:45:05.472576  399286 command_runner.go:130] > # 	"KILL",
	I1206 10:45:05.472579  399286 command_runner.go:130] > # ]
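Uncommenting that list yields the default set verbatim; a sketch that also drops "NET_BIND_SERVICE" (purely as an example of trimming a capability):

    default_capabilities = [
        "CHOWN",
        "DAC_OVERRIDE",
        "FSETID",
        "FOWNER",
        "SETGID",
        "SETUID",
        "SETPCAP",
        "KILL",
    ]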
	I1206 10:45:05.472587  399286 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1206 10:45:05.472602  399286 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1206 10:45:05.472607  399286 command_runner.go:130] > # add_inheritable_capabilities = false
	I1206 10:45:05.472616  399286 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1206 10:45:05.472628  399286 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1206 10:45:05.472632  399286 command_runner.go:130] > default_sysctls = [
	I1206 10:45:05.472637  399286 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1206 10:45:05.472643  399286 command_runner.go:130] > ]
	I1206 10:45:05.472650  399286 command_runner.go:130] > # List of devices on the host that a
	I1206 10:45:05.472660  399286 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1206 10:45:05.472664  399286 command_runner.go:130] > # allowed_devices = [
	I1206 10:45:05.472670  399286 command_runner.go:130] > # 	"/dev/fuse",
	I1206 10:45:05.472674  399286 command_runner.go:130] > # 	"/dev/net/tun",
	I1206 10:45:05.472681  399286 command_runner.go:130] > # ]
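A sketch of uncommenting the allowed_devices list above; a pod would then request one of these devices via the "io.kubernetes.cri-o.Devices" annotation mentioned in the comment:

    allowed_devices = [
        "/dev/fuse",
        "/dev/net/tun",
    ]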
	I1206 10:45:05.472689  399286 command_runner.go:130] > # List of additional devices, specified as
	I1206 10:45:05.472697  399286 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1206 10:45:05.472703  399286 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1206 10:45:05.472711  399286 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1206 10:45:05.472716  399286 command_runner.go:130] > # additional_devices = [
	I1206 10:45:05.472722  399286 command_runner.go:130] > # ]
	I1206 10:45:05.472730  399286 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1206 10:45:05.472737  399286 command_runner.go:130] > # cdi_spec_dirs = [
	I1206 10:45:05.472743  399286 command_runner.go:130] > # 	"/etc/cdi",
	I1206 10:45:05.472747  399286 command_runner.go:130] > # 	"/var/run/cdi",
	I1206 10:45:05.472750  399286 command_runner.go:130] > # ]
	I1206 10:45:05.472757  399286 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1206 10:45:05.472766  399286 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1206 10:45:05.472770  399286 command_runner.go:130] > # Defaults to false.
	I1206 10:45:05.472775  399286 command_runner.go:130] > # device_ownership_from_security_context = false
	I1206 10:45:05.472782  399286 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1206 10:45:05.472791  399286 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1206 10:45:05.472795  399286 command_runner.go:130] > # hooks_dir = [
	I1206 10:45:05.472800  399286 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1206 10:45:05.472806  399286 command_runner.go:130] > # ]
	I1206 10:45:05.472813  399286 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1206 10:45:05.472819  399286 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1206 10:45:05.472827  399286 command_runner.go:130] > # its default mounts from the following two files:
	I1206 10:45:05.472830  399286 command_runner.go:130] > #
	I1206 10:45:05.472836  399286 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1206 10:45:05.472845  399286 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1206 10:45:05.472852  399286 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1206 10:45:05.472858  399286 command_runner.go:130] > #
	I1206 10:45:05.472865  399286 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1206 10:45:05.472871  399286 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1206 10:45:05.472878  399286 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1206 10:45:05.472887  399286 command_runner.go:130] > #      only add mounts it finds in this file.
	I1206 10:45:05.472896  399286 command_runner.go:130] > #
	I1206 10:45:05.472902  399286 command_runner.go:130] > # default_mounts_file = ""
	I1206 10:45:05.472910  399286 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1206 10:45:05.472919  399286 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1206 10:45:05.472932  399286 command_runner.go:130] > # pids_limit = -1
	I1206 10:45:05.472938  399286 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1206 10:45:05.472947  399286 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1206 10:45:05.472961  399286 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1206 10:45:05.472979  399286 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1206 10:45:05.472983  399286 command_runner.go:130] > # log_size_max = -1
	I1206 10:45:05.472990  399286 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1206 10:45:05.472997  399286 command_runner.go:130] > # log_to_journald = false
	I1206 10:45:05.473006  399286 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1206 10:45:05.473011  399286 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1206 10:45:05.473016  399286 command_runner.go:130] > # Path to directory for container attach sockets.
	I1206 10:45:05.473024  399286 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1206 10:45:05.473032  399286 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1206 10:45:05.473036  399286 command_runner.go:130] > # bind_mount_prefix = ""
	I1206 10:45:05.473044  399286 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1206 10:45:05.473049  399286 command_runner.go:130] > # read_only = false
	I1206 10:45:05.473063  399286 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1206 10:45:05.473070  399286 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1206 10:45:05.473074  399286 command_runner.go:130] > # live configuration reload.
	I1206 10:45:05.473085  399286 command_runner.go:130] > # log_level = "info"
	I1206 10:45:05.473092  399286 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1206 10:45:05.473097  399286 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:45:05.473101  399286 command_runner.go:130] > # log_filter = ""
	I1206 10:45:05.473110  399286 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1206 10:45:05.473119  399286 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1206 10:45:05.473123  399286 command_runner.go:130] > # separated by comma.
	I1206 10:45:05.473132  399286 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1206 10:45:05.473138  399286 command_runner.go:130] > # uid_mappings = ""
	I1206 10:45:05.473145  399286 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1206 10:45:05.473155  399286 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1206 10:45:05.473162  399286 command_runner.go:130] > # separated by comma.
	I1206 10:45:05.473171  399286 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1206 10:45:05.473178  399286 command_runner.go:130] > # gid_mappings = ""
	I1206 10:45:05.473185  399286 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1206 10:45:05.473197  399286 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1206 10:45:05.473206  399286 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1206 10:45:05.473217  399286 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1206 10:45:05.473223  399286 command_runner.go:130] > # minimum_mappable_uid = -1
	I1206 10:45:05.473230  399286 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1206 10:45:05.473238  399286 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1206 10:45:05.473249  399286 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1206 10:45:05.473260  399286 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1206 10:45:05.473264  399286 command_runner.go:130] > # minimum_mappable_gid = -1
	I1206 10:45:05.473270  399286 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1206 10:45:05.473282  399286 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1206 10:45:05.473287  399286 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1206 10:45:05.473292  399286 command_runner.go:130] > # ctr_stop_timeout = 30
	I1206 10:45:05.473298  399286 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1206 10:45:05.473307  399286 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1206 10:45:05.473312  399286 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1206 10:45:05.473317  399286 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1206 10:45:05.473323  399286 command_runner.go:130] > # drop_infra_ctr = true
	I1206 10:45:05.473330  399286 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1206 10:45:05.473339  399286 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1206 10:45:05.473347  399286 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1206 10:45:05.473351  399286 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1206 10:45:05.473362  399286 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1206 10:45:05.473373  399286 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1206 10:45:05.473378  399286 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1206 10:45:05.473383  399286 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1206 10:45:05.473389  399286 command_runner.go:130] > # shared_cpuset = ""
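A hedged sketch of the two cpuset options above on a hypothetical small node; per the comments, infra_ctr_cpuset would typically mirror the kubelet's reserved CPUs:

    infra_ctr_cpuset = "0"    # illustrative: pin infra containers to CPU 0
    shared_cpuset = "1"       # illustrative: CPU that guaranteed containers may share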
	I1206 10:45:05.473397  399286 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1206 10:45:05.473408  399286 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1206 10:45:05.473415  399286 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1206 10:45:05.473423  399286 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1206 10:45:05.473429  399286 command_runner.go:130] > # pinns_path = ""
	I1206 10:45:05.473435  399286 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1206 10:45:05.473442  399286 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1206 10:45:05.473446  399286 command_runner.go:130] > # enable_criu_support = true
	I1206 10:45:05.473458  399286 command_runner.go:130] > # Enable/disable the generation of the container and
	I1206 10:45:05.473465  399286 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG.
	I1206 10:45:05.473469  399286 command_runner.go:130] > # enable_pod_events = false
	I1206 10:45:05.473476  399286 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1206 10:45:05.473483  399286 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1206 10:45:05.473487  399286 command_runner.go:130] > # default_runtime = "crun"
	I1206 10:45:05.473492  399286 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1206 10:45:05.473502  399286 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1206 10:45:05.473513  399286 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1206 10:45:05.473521  399286 command_runner.go:130] > # creation as a file is not desired either.
	I1206 10:45:05.473531  399286 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1206 10:45:05.473540  399286 command_runner.go:130] > # the hostname is being managed dynamically.
	I1206 10:45:05.473551  399286 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1206 10:45:05.473554  399286 command_runner.go:130] > # ]
	I1206 10:45:05.473561  399286 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1206 10:45:05.473567  399286 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1206 10:45:05.473576  399286 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1206 10:45:05.473582  399286 command_runner.go:130] > # Each entry in the table should follow the format:
	I1206 10:45:05.473596  399286 command_runner.go:130] > #
	I1206 10:45:05.473602  399286 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1206 10:45:05.473606  399286 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1206 10:45:05.473610  399286 command_runner.go:130] > # runtime_type = "oci"
	I1206 10:45:05.473616  399286 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1206 10:45:05.473623  399286 command_runner.go:130] > # inherit_default_runtime = false
	I1206 10:45:05.473628  399286 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1206 10:45:05.473632  399286 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1206 10:45:05.473646  399286 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1206 10:45:05.473650  399286 command_runner.go:130] > # monitor_env = []
	I1206 10:45:05.473654  399286 command_runner.go:130] > # privileged_without_host_devices = false
	I1206 10:45:05.473659  399286 command_runner.go:130] > # allowed_annotations = []
	I1206 10:45:05.473667  399286 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1206 10:45:05.473673  399286 command_runner.go:130] > # no_sync_log = false
	I1206 10:45:05.473677  399286 command_runner.go:130] > # default_annotations = {}
	I1206 10:45:05.473682  399286 command_runner.go:130] > # stream_websockets = false
	I1206 10:45:05.473689  399286 command_runner.go:130] > # seccomp_profile = ""
	I1206 10:45:05.473708  399286 command_runner.go:130] > # Where:
	I1206 10:45:05.473717  399286 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1206 10:45:05.473724  399286 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1206 10:45:05.473730  399286 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1206 10:45:05.473739  399286 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1206 10:45:05.473743  399286 command_runner.go:130] > #   in $PATH.
	I1206 10:45:05.473749  399286 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1206 10:45:05.473754  399286 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1206 10:45:05.473763  399286 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1206 10:45:05.473768  399286 command_runner.go:130] > #   state.
	I1206 10:45:05.473775  399286 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1206 10:45:05.473789  399286 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1206 10:45:05.473796  399286 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1206 10:45:05.473802  399286 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1206 10:45:05.473810  399286 command_runner.go:130] > #   the values from the default runtime on load time.
	I1206 10:45:05.473816  399286 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1206 10:45:05.473824  399286 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1206 10:45:05.473834  399286 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1206 10:45:05.473841  399286 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1206 10:45:05.473846  399286 command_runner.go:130] > #   The currently recognized values are:
	I1206 10:45:05.473852  399286 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1206 10:45:05.473862  399286 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1206 10:45:05.473868  399286 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1206 10:45:05.473876  399286 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1206 10:45:05.473890  399286 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1206 10:45:05.473900  399286 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1206 10:45:05.473907  399286 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1206 10:45:05.473914  399286 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1206 10:45:05.473924  399286 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1206 10:45:05.473930  399286 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1206 10:45:05.473938  399286 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1206 10:45:05.473946  399286 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1206 10:45:05.473955  399286 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1206 10:45:05.473961  399286 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1206 10:45:05.473970  399286 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1206 10:45:05.473978  399286 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1206 10:45:05.473988  399286 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1206 10:45:05.473992  399286 command_runner.go:130] > #   deprecated option "conmon".
	I1206 10:45:05.474000  399286 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1206 10:45:05.474008  399286 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1206 10:45:05.474015  399286 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1206 10:45:05.474020  399286 command_runner.go:130] > #   should be moved to the container's cgroup
	I1206 10:45:05.474027  399286 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1206 10:45:05.474034  399286 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1206 10:45:05.474042  399286 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1206 10:45:05.474048  399286 command_runner.go:130] > #   conmon-rs by using:
	I1206 10:45:05.474057  399286 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1206 10:45:05.474070  399286 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1206 10:45:05.474077  399286 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1206 10:45:05.474091  399286 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1206 10:45:05.474096  399286 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1206 10:45:05.474106  399286 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1206 10:45:05.474114  399286 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1206 10:45:05.474122  399286 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1206 10:45:05.474130  399286 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1206 10:45:05.474143  399286 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1206 10:45:05.474148  399286 command_runner.go:130] > #   when a machine crash happens.
	I1206 10:45:05.474159  399286 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1206 10:45:05.474172  399286 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1206 10:45:05.474181  399286 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1206 10:45:05.474188  399286 command_runner.go:130] > #   seccomp profile for the runtime.
	I1206 10:45:05.474212  399286 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1206 10:45:05.474223  399286 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1206 10:45:05.474227  399286 command_runner.go:130] > #
	I1206 10:45:05.474233  399286 command_runner.go:130] > # Using the seccomp notifier feature:
	I1206 10:45:05.474236  399286 command_runner.go:130] > #
	I1206 10:45:05.474244  399286 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1206 10:45:05.474254  399286 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1206 10:45:05.474257  399286 command_runner.go:130] > #
	I1206 10:45:05.474264  399286 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1206 10:45:05.474273  399286 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1206 10:45:05.474276  399286 command_runner.go:130] > #
	I1206 10:45:05.474283  399286 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1206 10:45:05.474286  399286 command_runner.go:130] > # feature.
	I1206 10:45:05.474289  399286 command_runner.go:130] > #
	I1206 10:45:05.474299  399286 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1206 10:45:05.474307  399286 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1206 10:45:05.474314  399286 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1206 10:45:05.474322  399286 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1206 10:45:05.474329  399286 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1206 10:45:05.474336  399286 command_runner.go:130] > #
	I1206 10:45:05.474344  399286 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1206 10:45:05.474350  399286 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1206 10:45:05.474354  399286 command_runner.go:130] > #
	I1206 10:45:05.474361  399286 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1206 10:45:05.474371  399286 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1206 10:45:05.474374  399286 command_runner.go:130] > #
	I1206 10:45:05.474380  399286 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1206 10:45:05.474386  399286 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1206 10:45:05.474392  399286 command_runner.go:130] > # limitation.
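Pulling the table format and the seccomp notifier discussion together, a hedged sketch of a runtime handler entry; the handler name and binary path are hypothetical, and only the monitor values mirror the real entries that follow:

    [crio.runtime.runtimes.myhandler]            # hypothetical handler name
    runtime_path = "/usr/local/bin/myruntime"    # hypothetical binary
    runtime_type = "oci"
    runtime_root = "/run/myruntime"
    monitor_path = "/usr/libexec/crio/conmon"
    monitor_cgroup = "pod"
    allowed_annotations = [
        "io.kubernetes.cri-o.seccompNotifierAction",    # opt in to the notifier feature above
    ]

A pod would select such a handler through a Kubernetes RuntimeClass whose handler field matches the runtime-handler name.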
	I1206 10:45:05.474401  399286 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1206 10:45:05.474409  399286 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1206 10:45:05.474413  399286 command_runner.go:130] > runtime_type = ""
	I1206 10:45:05.474417  399286 command_runner.go:130] > runtime_root = "/run/crun"
	I1206 10:45:05.474422  399286 command_runner.go:130] > inherit_default_runtime = false
	I1206 10:45:05.474426  399286 command_runner.go:130] > runtime_config_path = ""
	I1206 10:45:05.474432  399286 command_runner.go:130] > container_min_memory = ""
	I1206 10:45:05.474437  399286 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1206 10:45:05.474442  399286 command_runner.go:130] > monitor_cgroup = "pod"
	I1206 10:45:05.474448  399286 command_runner.go:130] > monitor_exec_cgroup = ""
	I1206 10:45:05.474453  399286 command_runner.go:130] > allowed_annotations = [
	I1206 10:45:05.474461  399286 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1206 10:45:05.474464  399286 command_runner.go:130] > ]
	I1206 10:45:05.474469  399286 command_runner.go:130] > privileged_without_host_devices = false
	I1206 10:45:05.474473  399286 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1206 10:45:05.474478  399286 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1206 10:45:05.474484  399286 command_runner.go:130] > runtime_type = ""
	I1206 10:45:05.474489  399286 command_runner.go:130] > runtime_root = "/run/runc"
	I1206 10:45:05.474496  399286 command_runner.go:130] > inherit_default_runtime = false
	I1206 10:45:05.474501  399286 command_runner.go:130] > runtime_config_path = ""
	I1206 10:45:05.474506  399286 command_runner.go:130] > container_min_memory = ""
	I1206 10:45:05.474513  399286 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1206 10:45:05.474518  399286 command_runner.go:130] > monitor_cgroup = "pod"
	I1206 10:45:05.474522  399286 command_runner.go:130] > monitor_exec_cgroup = ""
	I1206 10:45:05.474530  399286 command_runner.go:130] > privileged_without_host_devices = false
	I1206 10:45:05.474540  399286 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1206 10:45:05.474548  399286 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1206 10:45:05.474556  399286 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1206 10:45:05.474564  399286 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1206 10:45:05.474575  399286 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1206 10:45:05.474592  399286 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1206 10:45:05.474602  399286 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1206 10:45:05.474610  399286 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1206 10:45:05.474622  399286 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1206 10:45:05.474635  399286 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1206 10:45:05.474642  399286 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1206 10:45:05.474652  399286 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1206 10:45:05.474656  399286 command_runner.go:130] > # Example:
	I1206 10:45:05.474664  399286 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1206 10:45:05.474672  399286 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1206 10:45:05.474677  399286 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1206 10:45:05.474686  399286 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1206 10:45:05.474691  399286 command_runner.go:130] > # cpuset = "0-1"
	I1206 10:45:05.474703  399286 command_runner.go:130] > # cpushares = "5"
	I1206 10:45:05.474708  399286 command_runner.go:130] > # cpuquota = "1000"
	I1206 10:45:05.474712  399286 command_runner.go:130] > # cpuperiod = "100000"
	I1206 10:45:05.474716  399286 command_runner.go:130] > # cpulimit = "35"
	I1206 10:45:05.474720  399286 command_runner.go:130] > # Where:
	I1206 10:45:05.474724  399286 command_runner.go:130] > # The workload name is workload-type.
	I1206 10:45:05.474738  399286 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1206 10:45:05.474744  399286 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1206 10:45:05.474749  399286 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1206 10:45:05.474761  399286 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1206 10:45:05.474777  399286 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1206 10:45:05.474783  399286 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1206 10:45:05.474790  399286 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1206 10:45:05.474797  399286 command_runner.go:130] > # Default value is set to true
	I1206 10:45:05.474803  399286 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1206 10:45:05.474809  399286 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1206 10:45:05.474821  399286 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1206 10:45:05.474826  399286 command_runner.go:130] > # Default value is set to 'false'
	I1206 10:45:05.474830  399286 command_runner.go:130] > # disable_hostport_mapping = false
	I1206 10:45:05.474836  399286 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1206 10:45:05.474847  399286 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1206 10:45:05.474853  399286 command_runner.go:130] > # timezone = ""
	I1206 10:45:05.474860  399286 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1206 10:45:05.474866  399286 command_runner.go:130] > #
	I1206 10:45:05.474874  399286 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1206 10:45:05.474883  399286 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1206 10:45:05.474889  399286 command_runner.go:130] > [crio.image]
	I1206 10:45:05.474895  399286 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1206 10:45:05.474899  399286 command_runner.go:130] > # default_transport = "docker://"
	I1206 10:45:05.474913  399286 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1206 10:45:05.474920  399286 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1206 10:45:05.474924  399286 command_runner.go:130] > # global_auth_file = ""
	I1206 10:45:05.474929  399286 command_runner.go:130] > # The image used to instantiate infra containers.
	I1206 10:45:05.474938  399286 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:45:05.474943  399286 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1206 10:45:05.474952  399286 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1206 10:45:05.474959  399286 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1206 10:45:05.474967  399286 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:45:05.474972  399286 command_runner.go:130] > # pause_image_auth_file = ""
	I1206 10:45:05.474977  399286 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1206 10:45:05.474984  399286 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1206 10:45:05.474994  399286 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1206 10:45:05.475000  399286 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1206 10:45:05.475009  399286 command_runner.go:130] > # pause_command = "/pause"
	I1206 10:45:05.475015  399286 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1206 10:45:05.475021  399286 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1206 10:45:05.475030  399286 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1206 10:45:05.475036  399286 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1206 10:45:05.475044  399286 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1206 10:45:05.475051  399286 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1206 10:45:05.475058  399286 command_runner.go:130] > # pinned_images = [
	I1206 10:45:05.475061  399286 command_runner.go:130] > # ]
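A sketch of the three pattern styles the comment above describes; the second and third image names are hypothetical:

    pinned_images = [
        "registry.k8s.io/pause:3.10.1",    # exact match (the pause image above)
        "quay.io/myorg/agent*",            # glob: trailing wildcard (hypothetical)
        "*critical*",                      # keyword: wildcards on both ends (hypothetical)
    ]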
	I1206 10:45:05.475067  399286 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1206 10:45:05.475074  399286 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1206 10:45:05.475083  399286 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1206 10:45:05.475090  399286 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1206 10:45:05.475098  399286 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1206 10:45:05.475104  399286 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1206 10:45:05.475110  399286 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1206 10:45:05.475120  399286 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1206 10:45:05.475129  399286 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1206 10:45:05.475138  399286 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1206 10:45:05.475145  399286 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1206 10:45:05.475150  399286 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1206 10:45:05.475156  399286 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1206 10:45:05.475165  399286 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1206 10:45:05.475169  399286 command_runner.go:130] > # changing them here.
	I1206 10:45:05.475176  399286 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1206 10:45:05.475183  399286 command_runner.go:130] > # insecure_registries = [
	I1206 10:45:05.475186  399286 command_runner.go:130] > # ]
	I1206 10:45:05.475193  399286 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1206 10:45:05.475201  399286 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1206 10:45:05.475208  399286 command_runner.go:130] > # image_volumes = "mkdir"
	I1206 10:45:05.475214  399286 command_runner.go:130] > # Temporary directory to use for storing big files
	I1206 10:45:05.475220  399286 command_runner.go:130] > # big_files_temporary_dir = ""
	I1206 10:45:05.475226  399286 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1206 10:45:05.475236  399286 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1206 10:45:05.475241  399286 command_runner.go:130] > # auto_reload_registries = false
	I1206 10:45:05.475247  399286 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1206 10:45:05.475257  399286 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1206 10:45:05.475267  399286 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1206 10:45:05.475271  399286 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1206 10:45:05.475277  399286 command_runner.go:130] > # The mode of short name resolution.
	I1206 10:45:05.475284  399286 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1206 10:45:05.475293  399286 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1206 10:45:05.475298  399286 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1206 10:45:05.475314  399286 command_runner.go:130] > # short_name_mode = "enforcing"
	I1206 10:45:05.475321  399286 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1206 10:45:05.475327  399286 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1206 10:45:05.475335  399286 command_runner.go:130] > # oci_artifact_mount_support = true
	I1206 10:45:05.475343  399286 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1206 10:45:05.475349  399286 command_runner.go:130] > # CNI plugins.
	I1206 10:45:05.475353  399286 command_runner.go:130] > [crio.network]
	I1206 10:45:05.475360  399286 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1206 10:45:05.475368  399286 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1206 10:45:05.475386  399286 command_runner.go:130] > # cni_default_network = ""
	I1206 10:45:05.475398  399286 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1206 10:45:05.475407  399286 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1206 10:45:05.475413  399286 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1206 10:45:05.475419  399286 command_runner.go:130] > # plugin_dirs = [
	I1206 10:45:05.475424  399286 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1206 10:45:05.475429  399286 command_runner.go:130] > # ]
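A minimal sketch of the network section with the commented defaults filled in; the network name is a hypothetical placeholder:

    [crio.network]
    cni_default_network = "mynet"    # hypothetical; "" picks the first config found in network_dir
    network_dir = "/etc/cni/net.d/"
    plugin_dirs = [
        "/opt/cni/bin/",
    ]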
	I1206 10:45:05.475434  399286 command_runner.go:130] > # List of included pod metrics.
	I1206 10:45:05.475441  399286 command_runner.go:130] > # included_pod_metrics = [
	I1206 10:45:05.475445  399286 command_runner.go:130] > # ]
	I1206 10:45:05.475451  399286 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1206 10:45:05.475457  399286 command_runner.go:130] > [crio.metrics]
	I1206 10:45:05.475463  399286 command_runner.go:130] > # Globally enable or disable metrics support.
	I1206 10:45:05.475467  399286 command_runner.go:130] > # enable_metrics = false
	I1206 10:45:05.475472  399286 command_runner.go:130] > # Specify enabled metrics collectors.
	I1206 10:45:05.475476  399286 command_runner.go:130] > # Per default all metrics are enabled.
	I1206 10:45:05.475483  399286 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1206 10:45:05.475490  399286 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1206 10:45:05.475497  399286 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1206 10:45:05.475501  399286 command_runner.go:130] > # metrics_collectors = [
	I1206 10:45:05.475505  399286 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1206 10:45:05.475510  399286 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1206 10:45:05.475518  399286 command_runner.go:130] > # 	"containers_oom_total",
	I1206 10:45:05.475522  399286 command_runner.go:130] > # 	"processes_defunct",
	I1206 10:45:05.475528  399286 command_runner.go:130] > # 	"operations_total",
	I1206 10:45:05.475533  399286 command_runner.go:130] > # 	"operations_latency_seconds",
	I1206 10:45:05.475540  399286 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1206 10:45:05.475547  399286 command_runner.go:130] > # 	"operations_errors_total",
	I1206 10:45:05.475554  399286 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1206 10:45:05.475559  399286 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1206 10:45:05.475564  399286 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1206 10:45:05.475571  399286 command_runner.go:130] > # 	"image_pulls_success_total",
	I1206 10:45:05.475576  399286 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1206 10:45:05.475583  399286 command_runner.go:130] > # 	"containers_oom_count_total",
	I1206 10:45:05.475590  399286 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1206 10:45:05.475602  399286 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1206 10:45:05.475607  399286 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1206 10:45:05.475610  399286 command_runner.go:130] > # ]
	I1206 10:45:05.475616  399286 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1206 10:45:05.475620  399286 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1206 10:45:05.475626  399286 command_runner.go:130] > # The port on which the metrics server will listen.
	I1206 10:45:05.475639  399286 command_runner.go:130] > # metrics_port = 9090
	I1206 10:45:05.475646  399286 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1206 10:45:05.475649  399286 command_runner.go:130] > # metrics_socket = ""
	I1206 10:45:05.475657  399286 command_runner.go:130] > # The certificate for the secure metrics server.
	I1206 10:45:05.475670  399286 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1206 10:45:05.475677  399286 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1206 10:45:05.475691  399286 command_runner.go:130] > # certificate on any modification event.
	I1206 10:45:05.475695  399286 command_runner.go:130] > # metrics_cert = ""
	I1206 10:45:05.475703  399286 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1206 10:45:05.475708  399286 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1206 10:45:05.475712  399286 command_runner.go:130] > # metrics_key = ""
	I1206 10:45:05.475720  399286 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1206 10:45:05.475727  399286 command_runner.go:130] > [crio.tracing]
	I1206 10:45:05.475732  399286 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1206 10:45:05.475737  399286 command_runner.go:130] > # enable_tracing = false
	I1206 10:45:05.475748  399286 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1206 10:45:05.475753  399286 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1206 10:45:05.475767  399286 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1206 10:45:05.475772  399286 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
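The commented [crio.metrics] and [crio.tracing] defaults above are all that stand between this config and a live metrics/tracing endpoint. A minimal sketch, assuming CRI-O's standard /etc/crio/crio.conf.d/ override directory and a hypothetical drop-in file name, that writes those keys uncommented (Go, since minikube's own tooling is Go):

    // Sketch: write a CRI-O drop-in enabling the metrics and tracing keys
    // documented (commented out) in the dump above.
    package main

    import "os"

    func main() {
        dropIn := `[crio.metrics]
    enable_metrics = true
    metrics_host = "127.0.0.1"
    metrics_port = 9090

    [crio.tracing]
    enable_tracing = true
    tracing_endpoint = "127.0.0.1:4317"
    tracing_sampling_rate_per_million = 1000000 # per the comment above: always sample
    `
        // The directory is CRI-O's standard override location; the file
        // name 10-observability.conf is a hypothetical choice for this sketch.
        if err := os.WriteFile("/etc/crio/crio.conf.d/10-observability.conf", []byte(dropIn), 0o644); err != nil {
            panic(err)
        }
    }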
	I1206 10:45:05.475781  399286 command_runner.go:130] > # CRI-O NRI configuration.
	I1206 10:45:05.475784  399286 command_runner.go:130] > [crio.nri]
	I1206 10:45:05.475789  399286 command_runner.go:130] > # Globally enable or disable NRI.
	I1206 10:45:05.475792  399286 command_runner.go:130] > # enable_nri = true
	I1206 10:45:05.475799  399286 command_runner.go:130] > # NRI socket to listen on.
	I1206 10:45:05.475804  399286 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1206 10:45:05.475811  399286 command_runner.go:130] > # NRI plugin directory to use.
	I1206 10:45:05.475817  399286 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1206 10:45:05.475825  399286 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1206 10:45:05.475830  399286 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1206 10:45:05.475835  399286 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1206 10:45:05.475891  399286 command_runner.go:130] > # nri_disable_connections = false
	I1206 10:45:05.475901  399286 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1206 10:45:05.475906  399286 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1206 10:45:05.475911  399286 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1206 10:45:05.475918  399286 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1206 10:45:05.475923  399286 command_runner.go:130] > # NRI default validator configuration.
	I1206 10:45:05.475933  399286 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1206 10:45:05.475940  399286 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1206 10:45:05.475946  399286 command_runner.go:130] > # can be restricted/rejected:
	I1206 10:45:05.475950  399286 command_runner.go:130] > # - OCI hook injection
	I1206 10:45:05.475958  399286 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1206 10:45:05.475964  399286 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1206 10:45:05.475969  399286 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1206 10:45:05.475976  399286 command_runner.go:130] > # - adjustment of linux namespaces
	I1206 10:45:05.475983  399286 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1206 10:45:05.475990  399286 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1206 10:45:05.476000  399286 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1206 10:45:05.476005  399286 command_runner.go:130] > #
	I1206 10:45:05.476012  399286 command_runner.go:130] > # [crio.nri.default_validator]
	I1206 10:45:05.476020  399286 command_runner.go:130] > # nri_enable_default_validator = false
	I1206 10:45:05.476026  399286 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1206 10:45:05.476035  399286 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1206 10:45:05.476042  399286 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1206 10:45:05.476048  399286 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1206 10:45:05.476056  399286 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1206 10:45:05.476061  399286 command_runner.go:130] > # nri_validator_required_plugins = [
	I1206 10:45:05.476064  399286 command_runner.go:130] > # ]
	I1206 10:45:05.476070  399286 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
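The [crio.nri.default_validator] keys above gate which plugin adjustments the builtin validator rejects. A minimal sketch of a drop-in that enables NRI and rejects only OCI hook injection, leaving the other toggles at their defaults; the key names are copied from the dump, the combination of values is an assumption:

    // Sketch: print an NRI drop-in matching the commented defaults above,
    // with NRI on and the builtin validator rejecting OCI hook injection.
    package main

    import "fmt"

    func main() {
        fmt.Print(`[crio.nri]
    enable_nri = true

    [crio.nri.default_validator]
    nri_enable_default_validator = true
    nri_validator_reject_oci_hook_adjustment = true
    `)
    }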
	I1206 10:45:05.476079  399286 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1206 10:45:05.476083  399286 command_runner.go:130] > [crio.stats]
	I1206 10:45:05.476089  399286 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1206 10:45:05.476095  399286 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1206 10:45:05.476102  399286 command_runner.go:130] > # stats_collection_period = 0
	I1206 10:45:05.476109  399286 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1206 10:45:05.476119  399286 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1206 10:45:05.476124  399286 command_runner.go:130] > # collection_period = 0
	I1206 10:45:05.476211  399286 cni.go:84] Creating CNI manager for ""
	I1206 10:45:05.476226  399286 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:45:05.476254  399286 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 10:45:05.476282  399286 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-196950 NodeName:functional-196950 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
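With allocate-node-cidrs:true in the options above, the controller manager carves per-node pod ranges out of PodSubnet 10.244.0.0/16; for IPv4 the upstream default node mask is /24 (that default is not stated in this log). A short sketch of the arithmetic:

    // Sketch: enumerate the first few /24 node CIDRs that would be allocated
    // from the cluster pod subnet above, assuming the default IPv4 /24 mask.
    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        prefix := netip.MustParsePrefix("10.244.0.0/16") // PodSubnet from the kubeadm options
        addr := prefix.Addr()
        for i := 0; i < 3; i++ {
            fmt.Println(netip.PrefixFrom(addr, 24)) // 10.244.0.0/24, 10.244.1.0/24, 10.244.2.0/24
            a4 := addr.As4()
            a4[2]++ // step to the next /24 (safe while inside the /16 parent)
            addr = netip.AddrFrom4(a4)
        }
    }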
	I1206 10:45:05.476417  399286 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-196950"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
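	To inspect a generated config like the one above programmatically, a minimal sketch using the third-party gopkg.in/yaml.v3 package against a trimmed copy of the KubeletConfiguration document; the field selection is illustrative, not minikube's code:

    // Sketch: pull a few fields back out of the generated KubeletConfiguration.
    package main

    import (
        "fmt"

        "gopkg.in/yaml.v3"
    )

    const kubeletDoc = `apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs
    containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
    failSwapOn: false
    `

    func main() {
        var cfg struct {
            CgroupDriver             string `yaml:"cgroupDriver"`
            ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
            FailSwapOn               bool   `yaml:"failSwapOn"`
        }
        if err := yaml.Unmarshal([]byte(kubeletDoc), &cfg); err != nil {
            panic(err)
        }
        fmt.Println(cfg.CgroupDriver, cfg.ContainerRuntimeEndpoint, cfg.FailSwapOn)
    }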
	
	I1206 10:45:05.476505  399286 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 10:45:05.483758  399286 command_runner.go:130] > kubeadm
	I1206 10:45:05.483779  399286 command_runner.go:130] > kubectl
	I1206 10:45:05.483784  399286 command_runner.go:130] > kubelet
	I1206 10:45:05.484784  399286 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:45:05.484852  399286 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:45:05.492924  399286 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1206 10:45:05.506239  399286 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 10:45:05.519506  399286 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1206 10:45:05.533524  399286 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 10:45:05.537326  399286 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1206 10:45:05.537418  399286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:45:05.647140  399286 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:45:05.721344  399286 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950 for IP: 192.168.49.2
	I1206 10:45:05.721367  399286 certs.go:195] generating shared ca certs ...
	I1206 10:45:05.721384  399286 certs.go:227] acquiring lock for ca certs: {Name:mke2ec61a37b6f3abbcbeb9abd23d6a19d011dd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:45:05.721593  399286 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.key
	I1206 10:45:05.721667  399286 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.key
	I1206 10:45:05.721683  399286 certs.go:257] generating profile certs ...
	I1206 10:45:05.721813  399286 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.key
	I1206 10:45:05.721910  399286 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.key.a77b39a6
	I1206 10:45:05.721994  399286 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.key
	I1206 10:45:05.722034  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1206 10:45:05.722057  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1206 10:45:05.722073  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1206 10:45:05.722118  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1206 10:45:05.722158  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1206 10:45:05.722199  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1206 10:45:05.722217  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1206 10:45:05.722228  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1206 10:45:05.722301  399286 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855.pem (1338 bytes)
	W1206 10:45:05.722365  399286 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855_empty.pem, impossibly tiny 0 bytes
	I1206 10:45:05.722388  399286 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 10:45:05.722448  399286 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem (1082 bytes)
	I1206 10:45:05.722502  399286 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:45:05.722537  399286 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem (1679 bytes)
	I1206 10:45:05.722611  399286 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem (1708 bytes)
	I1206 10:45:05.722670  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:45:05.722691  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855.pem -> /usr/share/ca-certificates/364855.pem
	I1206 10:45:05.722718  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem -> /usr/share/ca-certificates/3648552.pem
	I1206 10:45:05.723349  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:45:05.743026  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 10:45:05.763126  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:45:05.783337  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 10:45:05.802756  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 10:45:05.821457  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 10:45:05.839993  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:45:05.858402  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 10:45:05.876528  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:45:05.894729  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855.pem --> /usr/share/ca-certificates/364855.pem (1338 bytes)
	I1206 10:45:05.912947  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem --> /usr/share/ca-certificates/3648552.pem (1708 bytes)
	I1206 10:45:05.931356  399286 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 10:45:05.945284  399286 ssh_runner.go:195] Run: openssl version
	I1206 10:45:05.951573  399286 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1206 10:45:05.951648  399286 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:45:05.959293  399286 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:45:05.967114  399286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:45:05.970832  399286 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  6 10:26 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:45:05.971103  399286 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:26 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:45:05.971168  399286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:45:06.014236  399286 command_runner.go:130] > b5213941
	I1206 10:45:06.014768  399286 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 10:45:06.023097  399286 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364855.pem
	I1206 10:45:06.030984  399286 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364855.pem /etc/ssl/certs/364855.pem
	I1206 10:45:06.039316  399286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364855.pem
	I1206 10:45:06.043457  399286 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  6 10:36 /usr/share/ca-certificates/364855.pem
	I1206 10:45:06.043549  399286 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:36 /usr/share/ca-certificates/364855.pem
	I1206 10:45:06.043624  399286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364855.pem
	I1206 10:45:06.084760  399286 command_runner.go:130] > 51391683
	I1206 10:45:06.084914  399286 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 10:45:06.092772  399286 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648552.pem
	I1206 10:45:06.100248  399286 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648552.pem /etc/ssl/certs/3648552.pem
	I1206 10:45:06.107970  399286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648552.pem
	I1206 10:45:06.112031  399286 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  6 10:36 /usr/share/ca-certificates/3648552.pem
	I1206 10:45:06.112134  399286 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:36 /usr/share/ca-certificates/3648552.pem
	I1206 10:45:06.112229  399286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648552.pem
	I1206 10:45:06.152822  399286 command_runner.go:130] > 3ec20f2e
	I1206 10:45:06.153315  399286 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
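The test/hash/symlink sequence above installs each PEM under /etc/ssl/certs/<subject-hash>.0, the layout OpenSSL-based clients use for CA lookup. A sketch replaying the same two steps for one certificate, shelling out to the identical openssl invocation (assumes openssl on PATH and the paths from the log):

    // Sketch: compute the OpenSSL subject hash for a PEM and install the
    // /etc/ssl/certs/<hash>.0 symlink, mirroring the commands logged above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pem := "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. b5213941, as in the log
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        os.Remove(link) // ln -fs semantics: replace any existing link
        if err := os.Symlink(pem, link); err != nil {
            panic(err)
        }
    }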
	I1206 10:45:06.161105  399286 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:45:06.165043  399286 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:45:06.165068  399286 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1206 10:45:06.165075  399286 command_runner.go:130] > Device: 259,1	Inode: 1826360     Links: 1
	I1206 10:45:06.165081  399286 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 10:45:06.165087  399286 command_runner.go:130] > Access: 2025-12-06 10:40:58.003190996 +0000
	I1206 10:45:06.165092  399286 command_runner.go:130] > Modify: 2025-12-06 10:36:53.916464205 +0000
	I1206 10:45:06.165098  399286 command_runner.go:130] > Change: 2025-12-06 10:36:53.916464205 +0000
	I1206 10:45:06.165103  399286 command_runner.go:130] >  Birth: 2025-12-06 10:36:53.916464205 +0000
	I1206 10:45:06.165195  399286 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 10:45:06.207365  399286 command_runner.go:130] > Certificate will not expire
	I1206 10:45:06.207850  399286 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 10:45:06.248448  399286 command_runner.go:130] > Certificate will not expire
	I1206 10:45:06.248932  399286 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 10:45:06.289656  399286 command_runner.go:130] > Certificate will not expire
	I1206 10:45:06.290116  399286 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 10:45:06.330828  399286 command_runner.go:130] > Certificate will not expire
	I1206 10:45:06.331412  399286 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 10:45:06.372096  399286 command_runner.go:130] > Certificate will not expire
	I1206 10:45:06.372595  399286 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1206 10:45:06.413596  399286 command_runner.go:130] > Certificate will not expire
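Each `openssl x509 -checkend 86400` call above asks whether a certificate expires within the next 24 hours. The same check can be done natively; a minimal stdlib sketch against one of the paths from the log:

    // Sketch: stdlib equivalent of `openssl x509 -checkend 86400`.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // -checkend 86400: does the cert expire within the next 86400 seconds?
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("Certificate will expire")
        } else {
            fmt.Println("Certificate will not expire") // matches the log output
        }
    }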
	I1206 10:45:06.414056  399286 kubeadm.go:401] StartCluster: {Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:45:06.414151  399286 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:45:06.414217  399286 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:45:06.442677  399286 cri.go:89] found id: ""
	I1206 10:45:06.442751  399286 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:45:06.449938  399286 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1206 10:45:06.449962  399286 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1206 10:45:06.449969  399286 command_runner.go:130] > /var/lib/minikube/etcd:
	I1206 10:45:06.450931  399286 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 10:45:06.450952  399286 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 10:45:06.451032  399286 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 10:45:06.459080  399286 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:45:06.459618  399286 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-196950" does not appear in /home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 10:45:06.459742  399286 kubeconfig.go:62] /home/jenkins/minikube-integration/22047-362985/kubeconfig needs updating (will repair): [kubeconfig missing "functional-196950" cluster setting kubeconfig missing "functional-196950" context setting]
	I1206 10:45:06.460016  399286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/kubeconfig: {Name:mk779651834cfbdc6f0b5e8f5a9abc0f05106181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:45:06.460484  399286 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 10:45:06.460638  399286 kapi.go:59] client config for functional-196950: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.crt", KeyFile:"/home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.key", CAFile:"/home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 10:45:06.461238  399286 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1206 10:45:06.461268  399286 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1206 10:45:06.461280  399286 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1206 10:45:06.461291  399286 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1206 10:45:06.461295  399286 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1206 10:45:06.461337  399286 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1206 10:45:06.461637  399286 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 10:45:06.473548  399286 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1206 10:45:06.473584  399286 kubeadm.go:602] duration metric: took 22.626231ms to restartPrimaryControlPlane
	I1206 10:45:06.473594  399286 kubeadm.go:403] duration metric: took 59.544914ms to StartCluster
	I1206 10:45:06.473609  399286 settings.go:142] acquiring lock: {Name:mk789e01bfd4ab9fa1e2a8415fa99b570b26926a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:45:06.473671  399286 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 10:45:06.474312  399286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/kubeconfig: {Name:mk779651834cfbdc6f0b5e8f5a9abc0f05106181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:45:06.474518  399286 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 10:45:06.474963  399286 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 10:45:06.475042  399286 addons.go:70] Setting storage-provisioner=true in profile "functional-196950"
	I1206 10:45:06.475066  399286 addons.go:239] Setting addon storage-provisioner=true in "functional-196950"
	I1206 10:45:06.475092  399286 host.go:66] Checking if "functional-196950" exists ...
	I1206 10:45:06.475912  399286 cli_runner.go:164] Run: docker container inspect functional-196950 --format={{.State.Status}}
	I1206 10:45:06.476264  399286 config.go:182] Loaded profile config "functional-196950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:45:06.476354  399286 addons.go:70] Setting default-storageclass=true in profile "functional-196950"
	I1206 10:45:06.476394  399286 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-196950"
	I1206 10:45:06.476791  399286 cli_runner.go:164] Run: docker container inspect functional-196950 --format={{.State.Status}}
	I1206 10:45:06.481213  399286 out.go:179] * Verifying Kubernetes components...
	I1206 10:45:06.484465  399286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:45:06.517764  399286 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 10:45:06.517930  399286 kapi.go:59] client config for functional-196950: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.crt", KeyFile:"/home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.key", CAFile:"/home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 10:45:06.518202  399286 addons.go:239] Setting addon default-storageclass=true in "functional-196950"
	I1206 10:45:06.518232  399286 host.go:66] Checking if "functional-196950" exists ...
	I1206 10:45:06.518684  399286 cli_runner.go:164] Run: docker container inspect functional-196950 --format={{.State.Status}}
	I1206 10:45:06.522254  399286 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 10:45:06.525206  399286 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:06.525232  399286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 10:45:06.525299  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:06.551517  399286 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:06.551540  399286 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 10:45:06.551605  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:06.570954  399286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:45:06.593327  399286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:45:06.685314  399286 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:45:06.722168  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:06.737572  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:07.472063  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:07.472098  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:07.472124  399286 retry.go:31] will retry after 153.213078ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:07.472168  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:07.472179  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:07.472186  399286 retry.go:31] will retry after 247.840204ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:07.472279  399286 node_ready.go:35] waiting up to 6m0s for node "functional-196950" to be "Ready" ...
	I1206 10:45:07.472418  399286 type.go:168] "Request Body" body=""
	I1206 10:45:07.472509  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:07.472828  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
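The round_trippers lines above show the Ready poller issuing GET /api/v1/nodes/functional-196950 with the profile's client certificate; an empty status means the connection never completed. A minimal sketch of that probe using net/http and the certificate paths from the kapi line above (not minikube's actual client):

    // Sketch: issue the same GET the poller logs above, authenticating with
    // the profile client cert and trusting the cluster CA.
    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "net/http"
        "os"
    )

    func main() {
        home := "/home/jenkins/minikube-integration/22047-362985/.minikube"
        cert, err := tls.LoadX509KeyPair(
            home+"/profiles/functional-196950/client.crt",
            home+"/profiles/functional-196950/client.key")
        if err != nil {
            panic(err)
        }
        caPEM, err := os.ReadFile(home + "/ca.crt")
        if err != nil {
            panic(err)
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)
        client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{
            Certificates: []tls.Certificate{cert},
            RootCAs:      pool,
        }}}
        resp, err := client.Get("https://192.168.49.2:8441/api/v1/nodes/functional-196950")
        if err != nil {
            // connection refused while the apiserver restarts, as in the log
            fmt.Println("not ready yet:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("status:", resp.Status)
    }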
	I1206 10:45:07.626184  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:07.684274  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:07.688010  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:07.688045  399286 retry.go:31] will retry after 503.005947ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:07.720209  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:07.781565  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:07.785057  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:07.785089  399286 retry.go:31] will retry after 443.254463ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:07.973439  399286 type.go:168] "Request Body" body=""
	I1206 10:45:07.973529  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:07.974023  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:08.191658  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:08.229200  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:08.274450  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:08.282645  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:08.282730  399286 retry.go:31] will retry after 342.048952ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:08.327096  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:08.327147  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:08.327166  399286 retry.go:31] will retry after 504.811759ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:08.473470  399286 type.go:168] "Request Body" body=""
	I1206 10:45:08.473573  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:08.473913  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:08.625427  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:08.684176  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:08.687968  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:08.688010  399286 retry.go:31] will retry after 1.261411242s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:08.832256  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:08.891180  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:08.894801  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:08.894836  399286 retry.go:31] will retry after 546.340513ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:08.973077  399286 type.go:168] "Request Body" body=""
	I1206 10:45:08.973155  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:08.973522  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:09.442273  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:09.472729  399286 type.go:168] "Request Body" body=""
	I1206 10:45:09.472803  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:09.473092  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:09.473139  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:09.506571  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:09.510870  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:09.510955  399286 retry.go:31] will retry after 985.837399ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:09.950606  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:09.973212  399286 type.go:168] "Request Body" body=""
	I1206 10:45:09.973298  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:09.973577  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:10.030286  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:10.030402  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:10.030452  399286 retry.go:31] will retry after 829.97822ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:10.472519  399286 type.go:168] "Request Body" body=""
	I1206 10:45:10.472588  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:10.472971  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:10.497156  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:10.582698  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:10.582757  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:10.582779  399286 retry.go:31] will retry after 2.303396874s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:10.861265  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:10.923027  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:10.923124  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:10.923150  399286 retry.go:31] will retry after 2.722563752s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:10.973315  399286 type.go:168] "Request Body" body=""
	I1206 10:45:10.973396  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:10.973700  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:11.473530  399286 type.go:168] "Request Body" body=""
	I1206 10:45:11.473608  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:11.474011  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:11.474073  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
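
The interleaved `GET /api/v1/nodes/functional-196950` polls come from minikube's node-readiness wait (the `node_ready.go` frames above). A hedged client-go sketch of the same check — the kubeconfig path and node name are copied from the log, while the surrounding helper is illustrative, not minikube's implementation:

```go
// Sketch: fetch the node and inspect its Ready condition, as the
// node_ready.go poll does. With the apiserver down, the Get call
// returns the same "connection refused" error logged above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-196950", metav1.GetOptions{})
	if err != nil {
		// The branch exercised throughout this log: retry later.
		fmt.Printf("cannot get node (will retry): %v\n", err)
		return
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("Ready=%s\n", c.Status)
		}
	}
}
```
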
	I1206 10:45:11.972906  399286 type.go:168] "Request Body" body=""
	I1206 10:45:11.972979  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:11.973246  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:12.472617  399286 type.go:168] "Request Body" body=""
	I1206 10:45:12.472696  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:12.473071  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:12.886451  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:12.946418  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:12.951114  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:12.951151  399286 retry.go:31] will retry after 2.435253477s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:12.973196  399286 type.go:168] "Request Body" body=""
	I1206 10:45:12.973267  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:12.973628  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:13.473384  399286 type.go:168] "Request Body" body=""
	I1206 10:45:13.473455  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:13.473719  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:13.646250  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:13.707346  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:13.707418  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:13.707442  399286 retry.go:31] will retry after 2.81497333s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:13.972564  399286 type.go:168] "Request Body" body=""
	I1206 10:45:13.972648  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:13.972980  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:13.973040  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:14.472608  399286 type.go:168] "Request Body" body=""
	I1206 10:45:14.472684  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:14.473066  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:14.972534  399286 type.go:168] "Request Body" body=""
	I1206 10:45:14.972625  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:14.972955  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:15.386668  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:15.447515  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:15.447555  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:15.447573  399286 retry.go:31] will retry after 2.327509257s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:15.472847  399286 type.go:168] "Request Body" body=""
	I1206 10:45:15.472922  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:15.473272  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:15.973226  399286 type.go:168] "Request Body" body=""
	I1206 10:45:15.973305  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:15.973654  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:15.973708  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:16.473465  399286 type.go:168] "Request Body" body=""
	I1206 10:45:16.473539  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:16.473810  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:16.523188  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:16.580568  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:16.584128  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:16.584161  399286 retry.go:31] will retry after 3.565207529s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:16.972816  399286 type.go:168] "Request Body" body=""
	I1206 10:45:16.972893  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:16.973236  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:17.472948  399286 type.go:168] "Request Body" body=""
	I1206 10:45:17.473028  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:17.473355  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
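
Each polled request advertises `Accept: application/vnd.kubernetes.protobuf,application/json`: protobuf preferred, JSON as fallback. For reference, a sketch of how a client-go `rest.Config` opts into that wire format; the field names are standard client-go configuration, and the kubeconfig path is reused from the log:

```go
// Sketch: configure a client-go rest.Config to negotiate the same
// content types seen in the Accept header above.
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	// Prefer protobuf on the wire, falling back to JSON — the same
	// pair the logged requests advertise.
	cfg.ContentType = "application/vnd.kubernetes.protobuf"
	cfg.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"
	fmt.Println("content type:", cfg.ContentType)
}
```
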
	I1206 10:45:17.775942  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:17.833742  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:17.838032  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:17.838073  399286 retry.go:31] will retry after 9.046125485s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
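
The retry delays recorded by `retry.go` grow unevenly — 829ms, then roughly 2.3–3.6s, now 9s — the signature of a jittered, growing backoff. An illustrative loop with the same shape (a sketch with hypothetical names, not minikube's retry package):

```go
// Sketch: jittered exponential backoff around a failing operation,
// mirroring the uneven, growing gaps between retries in this log.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := op(); err == nil {
			return nil
		}
		// Grow the delay and add jitter so retries spread out.
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		time.Sleep(delay + jitter)
		delay *= 2
	}
	return errors.New("all retries failed")
}

func main() {
	err := retryWithBackoff(5, 800*time.Millisecond, func() error {
		return errors.New("connection refused") // stands in for kubectl apply
	})
	fmt.Println(err)
}
```
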
	I1206 10:45:17.973259  399286 type.go:168] "Request Body" body=""
	I1206 10:45:17.973333  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:17.973605  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:18.473464  399286 type.go:168] "Request Body" body=""
	I1206 10:45:18.473544  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:18.473887  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:18.473936  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:18.972571  399286 type.go:168] "Request Body" body=""
	I1206 10:45:18.972668  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:18.973005  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:19.472497  399286 type.go:168] "Request Body" body=""
	I1206 10:45:19.472590  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:19.472870  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:19.972598  399286 type.go:168] "Request Body" body=""
	I1206 10:45:19.972674  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:19.972970  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:20.150467  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:20.215833  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:20.215885  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:20.215905  399286 retry.go:31] will retry after 9.222024728s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:20.473247  399286 type.go:168] "Request Body" body=""
	I1206 10:45:20.473322  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:20.473670  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:20.973445  399286 type.go:168] "Request Body" body=""
	I1206 10:45:20.973528  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:20.973801  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:20.973861  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:21.472555  399286 type.go:168] "Request Body" body=""
	I1206 10:45:21.472664  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:21.473020  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:21.972799  399286 type.go:168] "Request Body" body=""
	I1206 10:45:21.972877  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:21.973219  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:22.472497  399286 type.go:168] "Request Body" body=""
	I1206 10:45:22.472576  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:22.472904  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:22.972589  399286 type.go:168] "Request Body" body=""
	I1206 10:45:22.972674  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:22.973015  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:23.472753  399286 type.go:168] "Request Body" body=""
	I1206 10:45:23.472835  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:23.473181  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:23.473243  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:23.972733  399286 type.go:168] "Request Body" body=""
	I1206 10:45:23.972804  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:23.973079  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:24.472742  399286 type.go:168] "Request Body" body=""
	I1206 10:45:24.472825  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:24.473193  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:24.972805  399286 type.go:168] "Request Body" body=""
	I1206 10:45:24.972890  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:24.973299  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:25.473054  399286 type.go:168] "Request Body" body=""
	I1206 10:45:25.473127  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:25.473403  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:25.473453  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
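
Every warning above bottoms out in `connect: connection refused`, meaning the TCP handshake itself is rejected because no apiserver process is listening on 192.168.49.2:8441 — distinct from a timeout or a TLS failure, which would indicate different problems. A small sketch of how Go code can tell that case apart (address reused from the log; `errors.Is` unwraps through `*net.OpError` to the syscall errno):

```go
// Sketch: distinguish "connection refused" from other network errors.
package main

import (
	"errors"
	"fmt"
	"net"
	"syscall"
	"time"
)

func main() {
	_, err := net.DialTimeout("tcp", "192.168.49.2:8441", time.Second)
	if errors.Is(err, syscall.ECONNREFUSED) {
		fmt.Println("port closed: nothing is listening on 8441 yet")
	} else if err != nil {
		fmt.Printf("other network error: %v\n", err)
	}
}
```
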
	I1206 10:45:25.972761  399286 type.go:168] "Request Body" body=""
	I1206 10:45:25.972834  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:25.973177  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:26.473056  399286 type.go:168] "Request Body" body=""
	I1206 10:45:26.473132  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:26.473476  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:26.884353  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:26.943184  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:26.947029  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:26.947062  399286 retry.go:31] will retry after 13.756266916s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:26.973239  399286 type.go:168] "Request Body" body=""
	I1206 10:45:26.973309  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:26.973589  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:27.473507  399286 type.go:168] "Request Body" body=""
	I1206 10:45:27.473585  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:27.473949  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:27.474006  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:27.972689  399286 type.go:168] "Request Body" body=""
	I1206 10:45:27.972763  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:27.973145  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:28.472835  399286 type.go:168] "Request Body" body=""
	I1206 10:45:28.472909  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:28.473194  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:28.972604  399286 type.go:168] "Request Body" body=""
	I1206 10:45:28.972682  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:28.972972  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:29.438741  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:29.473252  399286 type.go:168] "Request Body" body=""
	I1206 10:45:29.473342  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:29.473619  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:29.500011  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:29.500052  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:29.500073  399286 retry.go:31] will retry after 11.458105653s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:29.972514  399286 type.go:168] "Request Body" body=""
	I1206 10:45:29.972601  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:29.972925  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:29.972975  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:30.472573  399286 type.go:168] "Request Body" body=""
	I1206 10:45:30.472647  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:30.472967  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:30.972595  399286 type.go:168] "Request Body" body=""
	I1206 10:45:30.972703  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:30.973084  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:31.472784  399286 type.go:168] "Request Body" body=""
	I1206 10:45:31.472855  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:31.473199  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:31.972958  399286 type.go:168] "Request Body" body=""
	I1206 10:45:31.973040  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:31.973376  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:31.973432  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:32.473378  399286 type.go:168] "Request Body" body=""
	I1206 10:45:32.473454  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:32.473784  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:32.972445  399286 type.go:168] "Request Body" body=""
	I1206 10:45:32.972534  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:32.972822  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:33.472492  399286 type.go:168] "Request Body" body=""
	I1206 10:45:33.472570  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:33.472871  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:33.972494  399286 type.go:168] "Request Body" body=""
	I1206 10:45:33.972591  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:33.972945  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:34.472578  399286 type.go:168] "Request Body" body=""
	I1206 10:45:34.472650  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:34.473009  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:34.473064  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:34.972732  399286 type.go:168] "Request Body" body=""
	I1206 10:45:34.972808  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:34.973199  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:35.472761  399286 type.go:168] "Request Body" body=""
	I1206 10:45:35.472857  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:35.473192  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:35.972534  399286 type.go:168] "Request Body" body=""
	I1206 10:45:35.972619  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:35.972903  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:36.472813  399286 type.go:168] "Request Body" body=""
	I1206 10:45:36.472898  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:36.473245  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:36.473300  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:36.972930  399286 type.go:168] "Request Body" body=""
	I1206 10:45:36.973016  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:36.973389  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:37.473181  399286 type.go:168] "Request Body" body=""
	I1206 10:45:37.473253  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:37.473531  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:37.973319  399286 type.go:168] "Request Body" body=""
	I1206 10:45:37.973403  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:37.973730  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:38.472498  399286 type.go:168] "Request Body" body=""
	I1206 10:45:38.472583  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:38.472928  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:38.972624  399286 type.go:168] "Request Body" body=""
	I1206 10:45:38.972703  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:38.973126  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:38.973176  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:39.472568  399286 type.go:168] "Request Body" body=""
	I1206 10:45:39.472665  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:39.472987  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:39.972728  399286 type.go:168] "Request Body" body=""
	I1206 10:45:39.972805  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:39.973175  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:40.473384  399286 type.go:168] "Request Body" body=""
	I1206 10:45:40.473456  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:40.473714  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:40.704276  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:40.766032  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:40.766082  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:40.766102  399286 retry.go:31] will retry after 12.834175432s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:40.958402  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:40.972905  399286 type.go:168] "Request Body" body=""
	I1206 10:45:40.972992  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:40.973301  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:40.973353  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:41.030830  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:41.030878  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:41.030900  399286 retry.go:31] will retry after 14.333484689s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:41.472501  399286 type.go:168] "Request Body" body=""
	I1206 10:45:41.472600  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:41.472944  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:41.972853  399286 type.go:168] "Request Body" body=""
	I1206 10:45:41.972920  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:41.973187  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:42.472557  399286 type.go:168] "Request Body" body=""
	I1206 10:45:42.472636  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:42.472968  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:42.972558  399286 type.go:168] "Request Body" body=""
	I1206 10:45:42.972635  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:42.972937  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:43.472504  399286 type.go:168] "Request Body" body=""
	I1206 10:45:43.472579  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:43.472849  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:43.472893  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:43.972555  399286 type.go:168] "Request Body" body=""
	I1206 10:45:43.972629  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:43.972940  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:44.472608  399286 type.go:168] "Request Body" body=""
	I1206 10:45:44.472707  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:44.473088  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:44.972724  399286 type.go:168] "Request Body" body=""
	I1206 10:45:44.972794  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:44.973077  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:45.472782  399286 type.go:168] "Request Body" body=""
	I1206 10:45:45.472865  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:45.473241  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:45.473304  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:45.972826  399286 type.go:168] "Request Body" body=""
	I1206 10:45:45.972906  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:45.973262  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:46.473108  399286 type.go:168] "Request Body" body=""
	I1206 10:45:46.473196  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:46.473467  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:46.973436  399286 type.go:168] "Request Body" body=""
	I1206 10:45:46.973508  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:46.973863  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:47.472551  399286 type.go:168] "Request Body" body=""
	I1206 10:45:47.472626  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:47.472969  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:47.972653  399286 type.go:168] "Request Body" body=""
	I1206 10:45:47.972724  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:47.972985  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:47.973026  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:48.472555  399286 type.go:168] "Request Body" body=""
	I1206 10:45:48.472631  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:48.472979  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:48.972567  399286 type.go:168] "Request Body" body=""
	I1206 10:45:48.972648  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:48.973011  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:49.472610  399286 type.go:168] "Request Body" body=""
	I1206 10:45:49.472682  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:49.473011  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:49.972715  399286 type.go:168] "Request Body" body=""
	I1206 10:45:49.972814  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:49.973135  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:49.973192  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:50.472573  399286 type.go:168] "Request Body" body=""
	I1206 10:45:50.472649  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:50.473004  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:50.972718  399286 type.go:168] "Request Body" body=""
	I1206 10:45:50.972788  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:50.973064  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:51.472737  399286 type.go:168] "Request Body" body=""
	I1206 10:45:51.472812  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:51.473132  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:51.972870  399286 type.go:168] "Request Body" body=""
	I1206 10:45:51.972960  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:51.973314  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:51.973366  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:52.472505  399286 type.go:168] "Request Body" body=""
	I1206 10:45:52.472573  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:52.472847  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:52.972578  399286 type.go:168] "Request Body" body=""
	I1206 10:45:52.972662  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:52.973040  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:53.472618  399286 type.go:168] "Request Body" body=""
	I1206 10:45:53.472697  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:53.473027  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:53.600459  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:53.661736  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:53.665292  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:53.665323  399286 retry.go:31] will retry after 22.486760262s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
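[editor's note] The retry.go lines above record minikube's apply-with-backoff behavior: the kubectl apply fails while the apiserver is down, and the addon manager schedules another attempt after a jittered delay (22.49 s here; 12.5 s and 16.7 s later in this log). A minimal sketch of that pattern, assuming a hypothetical retryApply helper rather than minikube's real retry API:

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// retryApply runs "kubectl apply" on a manifest and, on failure, waits a
	// randomized delay before trying again, mimicking the jittered waits in
	// the log. The manifest path and attempt count are illustrative.
	func retryApply(manifest string, attempts int) error {
		var err error
		for i := 0; i < attempts; i++ {
			cmd := exec.Command("kubectl", "apply", "--force", "-f", manifest)
			out, runErr := cmd.CombinedOutput()
			if runErr == nil {
				return nil
			}
			err = fmt.Errorf("apply failed: %v\n%s", runErr, out)
			if i < attempts-1 {
				// Randomized wait in the 10-30 s range, like the log's delays.
				wait := time.Duration(10+rand.Intn(20)) * time.Second
				fmt.Printf("will retry after %s: %v\n", wait, err)
				time.Sleep(wait)
			}
		}
		return err
	}

	func main() {
		if err := retryApply("/etc/kubernetes/addons/storageclass.yaml", 3); err != nil {
			fmt.Println("giving up:", err)
		}
	}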
	I1206 10:45:53.972617  399286 type.go:168] "Request Body" body=""
	I1206 10:45:53.972697  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:53.972964  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:54.472589  399286 type.go:168] "Request Body" body=""
	I1206 10:45:54.472671  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:54.473035  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:54.473093  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:54.972750  399286 type.go:168] "Request Body" body=""
	I1206 10:45:54.972837  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:54.973175  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:55.364722  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:55.425632  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:55.425678  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:55.425713  399286 retry.go:31] will retry after 12.507538253s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:55.472809  399286 type.go:168] "Request Body" body=""
	I1206 10:45:55.472887  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:55.473184  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:55.972552  399286 type.go:168] "Request Body" body=""
	I1206 10:45:55.972650  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:55.972997  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:56.472967  399286 type.go:168] "Request Body" body=""
	I1206 10:45:56.473058  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:56.473382  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:56.473432  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:56.973296  399286 type.go:168] "Request Body" body=""
	I1206 10:45:56.973367  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:56.973664  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:57.473479  399286 type.go:168] "Request Body" body=""
	I1206 10:45:57.473548  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:57.473911  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:57.972639  399286 type.go:168] "Request Body" body=""
	I1206 10:45:57.972714  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:57.973013  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:58.472518  399286 type.go:168] "Request Body" body=""
	I1206 10:45:58.472589  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:58.472883  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:58.972593  399286 type.go:168] "Request Body" body=""
	I1206 10:45:58.972667  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:58.973050  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:58.973107  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:59.472758  399286 type.go:168] "Request Body" body=""
	I1206 10:45:59.472833  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:59.473125  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:59.972559  399286 type.go:168] "Request Body" body=""
	I1206 10:45:59.972680  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:59.973026  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:00.472759  399286 type.go:168] "Request Body" body=""
	I1206 10:46:00.472862  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:00.473243  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:00.972535  399286 type.go:168] "Request Body" body=""
	I1206 10:46:00.972611  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:00.972922  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:01.472611  399286 type.go:168] "Request Body" body=""
	I1206 10:46:01.472687  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:01.473449  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:01.473514  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:01.973455  399286 type.go:168] "Request Body" body=""
	I1206 10:46:01.973535  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:01.973907  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:02.472594  399286 type.go:168] "Request Body" body=""
	I1206 10:46:02.472664  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:02.472938  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:02.972637  399286 type.go:168] "Request Body" body=""
	I1206 10:46:02.972721  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:02.973106  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:03.472580  399286 type.go:168] "Request Body" body=""
	I1206 10:46:03.472660  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:03.473047  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:03.972706  399286 type.go:168] "Request Body" body=""
	I1206 10:46:03.972777  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:03.973074  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:03.973125  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:04.472781  399286 type.go:168] "Request Body" body=""
	I1206 10:46:04.472867  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:04.473199  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:04.972938  399286 type.go:168] "Request Body" body=""
	I1206 10:46:04.973022  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:04.973345  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:05.473133  399286 type.go:168] "Request Body" body=""
	I1206 10:46:05.473203  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:05.473463  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:05.973200  399286 type.go:168] "Request Body" body=""
	I1206 10:46:05.973298  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:05.973625  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:05.973682  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:06.472493  399286 type.go:168] "Request Body" body=""
	I1206 10:46:06.472614  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:06.473110  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:06.972852  399286 type.go:168] "Request Body" body=""
	I1206 10:46:06.972929  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:06.973194  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:07.472870  399286 type.go:168] "Request Body" body=""
	I1206 10:46:07.472948  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:07.473250  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:07.933511  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:46:07.973023  399286 type.go:168] "Request Body" body=""
	I1206 10:46:07.973096  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:07.973373  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:07.994514  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:46:07.994569  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:46:07.994603  399286 retry.go:31] will retry after 24.706041433s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:46:08.473166  399286 type.go:168] "Request Body" body=""
	I1206 10:46:08.473240  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:08.473542  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:08.473592  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:08.973437  399286 type.go:168] "Request Body" body=""
	I1206 10:46:08.973586  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:08.973915  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:09.472565  399286 type.go:168] "Request Body" body=""
	I1206 10:46:09.472644  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:09.472997  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:09.972519  399286 type.go:168] "Request Body" body=""
	I1206 10:46:09.972596  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:09.972890  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:10.472617  399286 type.go:168] "Request Body" body=""
	I1206 10:46:10.472695  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:10.473054  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:10.972589  399286 type.go:168] "Request Body" body=""
	I1206 10:46:10.972671  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:10.973007  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:10.973060  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:11.472630  399286 type.go:168] "Request Body" body=""
	I1206 10:46:11.472699  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:11.473093  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:11.972992  399286 type.go:168] "Request Body" body=""
	I1206 10:46:11.973065  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:11.973384  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:12.472990  399286 type.go:168] "Request Body" body=""
	I1206 10:46:12.473133  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:12.473477  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:12.973203  399286 type.go:168] "Request Body" body=""
	I1206 10:46:12.973288  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:12.973547  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:12.973597  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:13.473422  399286 type.go:168] "Request Body" body=""
	I1206 10:46:13.473507  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:13.473830  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:13.972530  399286 type.go:168] "Request Body" body=""
	I1206 10:46:13.972634  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:13.972982  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:14.472697  399286 type.go:168] "Request Body" body=""
	I1206 10:46:14.472780  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:14.473132  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:14.972546  399286 type.go:168] "Request Body" body=""
	I1206 10:46:14.972620  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:14.972954  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:15.472543  399286 type.go:168] "Request Body" body=""
	I1206 10:46:15.472620  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:15.472950  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:15.473077  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:15.972516  399286 type.go:168] "Request Body" body=""
	I1206 10:46:15.972589  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:15.972880  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:16.153289  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:46:16.211194  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:46:16.214959  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:46:16.214991  399286 retry.go:31] will retry after 16.737835039s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:46:16.473494  399286 type.go:168] "Request Body" body=""
	I1206 10:46:16.473573  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:16.473903  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:16.972909  399286 type.go:168] "Request Body" body=""
	I1206 10:46:16.972986  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:16.973336  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:17.473112  399286 type.go:168] "Request Body" body=""
	I1206 10:46:17.473189  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:17.473465  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:17.473508  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:17.973266  399286 type.go:168] "Request Body" body=""
	I1206 10:46:17.973344  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:17.973710  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:18.472499  399286 type.go:168] "Request Body" body=""
	I1206 10:46:18.472586  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:18.472953  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:18.972650  399286 type.go:168] "Request Body" body=""
	I1206 10:46:18.972719  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:18.973068  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:19.472565  399286 type.go:168] "Request Body" body=""
	I1206 10:46:19.472638  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:19.472948  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:19.972573  399286 type.go:168] "Request Body" body=""
	I1206 10:46:19.972649  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:19.972985  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:19.973044  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:20.472488  399286 type.go:168] "Request Body" body=""
	I1206 10:46:20.472560  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:20.472892  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:20.972569  399286 type.go:168] "Request Body" body=""
	I1206 10:46:20.972648  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:20.973000  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:21.472659  399286 type.go:168] "Request Body" body=""
	I1206 10:46:21.472741  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:21.473075  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:21.972911  399286 type.go:168] "Request Body" body=""
	I1206 10:46:21.972985  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:21.973292  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:21.973342  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:22.473039  399286 type.go:168] "Request Body" body=""
	I1206 10:46:22.473118  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:22.473451  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:22.973318  399286 type.go:168] "Request Body" body=""
	I1206 10:46:22.973392  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:22.973733  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:23.472438  399286 type.go:168] "Request Body" body=""
	I1206 10:46:23.472509  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:23.472819  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:23.972535  399286 type.go:168] "Request Body" body=""
	I1206 10:46:23.972611  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:23.972929  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:24.472539  399286 type.go:168] "Request Body" body=""
	I1206 10:46:24.472621  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:24.472971  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:24.473042  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:24.972529  399286 type.go:168] "Request Body" body=""
	I1206 10:46:24.972598  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:24.972883  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:25.472559  399286 type.go:168] "Request Body" body=""
	I1206 10:46:25.472685  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:25.473033  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:25.972748  399286 type.go:168] "Request Body" body=""
	I1206 10:46:25.972833  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:25.973182  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:26.472983  399286 type.go:168] "Request Body" body=""
	I1206 10:46:26.473065  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:26.473364  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:26.473432  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:26.973366  399286 type.go:168] "Request Body" body=""
	I1206 10:46:26.973451  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:26.973797  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:27.472534  399286 type.go:168] "Request Body" body=""
	I1206 10:46:27.472617  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:27.472962  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:27.972662  399286 type.go:168] "Request Body" body=""
	I1206 10:46:27.972735  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:27.973174  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:28.472540  399286 type.go:168] "Request Body" body=""
	I1206 10:46:28.472653  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:28.472975  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:28.972588  399286 type.go:168] "Request Body" body=""
	I1206 10:46:28.972684  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:28.973062  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:28.973117  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:29.472612  399286 type.go:168] "Request Body" body=""
	I1206 10:46:29.472691  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:29.473027  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:29.972588  399286 type.go:168] "Request Body" body=""
	I1206 10:46:29.972665  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:29.973045  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:30.472639  399286 type.go:168] "Request Body" body=""
	I1206 10:46:30.472714  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:30.473023  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:30.972504  399286 type.go:168] "Request Body" body=""
	I1206 10:46:30.972575  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:30.972851  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:31.472534  399286 type.go:168] "Request Body" body=""
	I1206 10:46:31.472619  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:31.472983  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:31.473045  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:31.973254  399286 type.go:168] "Request Body" body=""
	I1206 10:46:31.973345  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:31.973713  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:32.472451  399286 type.go:168] "Request Body" body=""
	I1206 10:46:32.472529  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:32.472822  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:32.701368  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:46:32.764470  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:46:32.764520  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:46:32.764620  399286 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
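[editor's note] The failure mode above is not a bad manifest: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver, and with nothing listening on port 8441 that download itself fails with "connection refused" (hence the suggestion to pass --validate=false). A sketch, assuming only the Go standard library, of probing apiserver reachability first so "apiserver down" can be told apart from "manifest invalid"; the address and timeout are illustrative values:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// apiserverReachable reports whether a TCP connection to the apiserver
	// port succeeds within the timeout.
	func apiserverReachable(addr string, timeout time.Duration) bool {
		conn, err := net.DialTimeout("tcp", addr, timeout)
		if err != nil {
			return false // e.g. "connect: connection refused", as in this log
		}
		conn.Close()
		return true
	}

	func main() {
		if !apiserverReachable("192.168.49.2:8441", 2*time.Second) {
			fmt.Println("apiserver not accepting connections; skip apply and retry later")
			return
		}
		fmt.Println("apiserver up; safe to run kubectl apply")
	}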
	I1206 10:46:32.953898  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	[... one more identical GET poll at 10:46:32.973; connection still refused ...]
	I1206 10:46:33.013712  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:46:33.017430  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:46:33.017463  399286 retry.go:31] will retry after 30.205234164s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
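	The retry.go line above ("will retry after 30.205234164s") is minikube's generic retry-with-backoff helper at work. A rough shell sketch of that behaviour, not minikube's actual implementation and with illustrative attempt counts and delays, looks like:

	    # Reattempt the apply with a growing delay, mirroring the
	    # "will retry after ..." behaviour logged above. The attempt limit
	    # and delays here are made up for illustration.
	    delay=5
	    for attempt in 1 2 3 4 5; do
	      sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	        /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
	        -f /etc/kubernetes/addons/storageclass.yaml && break
	      echo "attempt ${attempt} failed; retrying in ${delay}s" >&2
	      sleep "${delay}"
	      delay=$((delay * 2))
	    done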
	[... the GET poll of https://192.168.49.2:8441/api/v1/nodes/functional-196950 repeats unchanged every ~500ms from 10:46:33.472 through 10:47:02.972; every response is empty (connect: connection refused) and node_ready.go emits the same "will retry" warning roughly every two seconds ...]
	I1206 10:47:03.223490  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:47:03.285940  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:47:03.285995  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:47:03.286078  399286 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1206 10:47:03.289299  399286 out.go:179] * Enabled addons: 
	I1206 10:47:03.293166  399286 addons.go:530] duration metric: took 1m56.818196786s for enable addons: enabled=[]
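	The polling that resumes below is the node readiness wait: node_ready.go GETs the node object every ~500ms and checks its "Ready" condition. A hypothetical shell approximation of the same check (the real loop is Go code speaking protobuf to the apiserver) is:

	    # Poll the node's Ready condition every 500ms until it reports True,
	    # mirroring the node_ready.go wait in the log. Hypothetical sketch;
	    # assumes kubectl can reach the same apiserver.
	    until [ "$(kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	          get node functional-196950 \
	          -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' \
	          2>/dev/null)" = "True" ]; do
	      sleep 0.5
	    done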
	[... polling resumes and continues unchanged every ~500ms from 10:47:03.473 through 10:47:27.473, every request still failing with connect: connection refused and node_ready.go repeating its "will retry" warning ...]
	I1206 10:47:27.973360  399286 type.go:168] "Request Body" body=""
	I1206 10:47:27.973441  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:27.973782  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:28.472538  399286 type.go:168] "Request Body" body=""
	I1206 10:47:28.472619  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:28.472967  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:28.972640  399286 type.go:168] "Request Body" body=""
	I1206 10:47:28.972710  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:28.973005  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:29.472569  399286 type.go:168] "Request Body" body=""
	I1206 10:47:29.472650  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:29.472987  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:29.972580  399286 type.go:168] "Request Body" body=""
	I1206 10:47:29.972672  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:29.973033  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:29.973089  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:30.472734  399286 type.go:168] "Request Body" body=""
	I1206 10:47:30.472805  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:30.473130  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:30.972629  399286 type.go:168] "Request Body" body=""
	I1206 10:47:30.972708  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:30.973010  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:31.472578  399286 type.go:168] "Request Body" body=""
	I1206 10:47:31.472654  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:31.472987  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:31.972822  399286 type.go:168] "Request Body" body=""
	I1206 10:47:31.972897  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:31.973160  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:31.973200  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:32.472914  399286 type.go:168] "Request Body" body=""
	I1206 10:47:32.473004  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:32.473347  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:32.973159  399286 type.go:168] "Request Body" body=""
	I1206 10:47:32.973233  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:32.973581  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:33.473360  399286 type.go:168] "Request Body" body=""
	I1206 10:47:33.473433  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:33.473718  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:33.973498  399286 type.go:168] "Request Body" body=""
	I1206 10:47:33.973577  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:33.973949  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:33.974029  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:34.472526  399286 type.go:168] "Request Body" body=""
	I1206 10:47:34.472608  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:34.472947  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:34.972533  399286 type.go:168] "Request Body" body=""
	I1206 10:47:34.972628  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:34.972989  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:35.472565  399286 type.go:168] "Request Body" body=""
	I1206 10:47:35.472646  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:35.473023  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:35.972740  399286 type.go:168] "Request Body" body=""
	I1206 10:47:35.972818  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:35.973158  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:36.473170  399286 type.go:168] "Request Body" body=""
	I1206 10:47:36.473254  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:36.473528  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:36.473569  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:36.972525  399286 type.go:168] "Request Body" body=""
	I1206 10:47:36.972602  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:36.972938  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:37.472576  399286 type.go:168] "Request Body" body=""
	I1206 10:47:37.472656  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:37.473000  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:37.972555  399286 type.go:168] "Request Body" body=""
	I1206 10:47:37.972627  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:37.972895  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:38.472579  399286 type.go:168] "Request Body" body=""
	I1206 10:47:38.472663  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:38.473008  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:38.972569  399286 type.go:168] "Request Body" body=""
	I1206 10:47:38.972649  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:38.973012  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:38.973070  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:39.472526  399286 type.go:168] "Request Body" body=""
	I1206 10:47:39.472602  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:39.472864  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:39.972558  399286 type.go:168] "Request Body" body=""
	I1206 10:47:39.972636  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:39.972965  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:40.472567  399286 type.go:168] "Request Body" body=""
	I1206 10:47:40.472639  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:40.472972  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:40.972512  399286 type.go:168] "Request Body" body=""
	I1206 10:47:40.972588  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:40.972883  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:41.472541  399286 type.go:168] "Request Body" body=""
	I1206 10:47:41.472626  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:41.472980  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:41.473038  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:41.972863  399286 type.go:168] "Request Body" body=""
	I1206 10:47:41.972939  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:41.973307  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:42.472600  399286 type.go:168] "Request Body" body=""
	I1206 10:47:42.472680  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:42.472974  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:42.972681  399286 type.go:168] "Request Body" body=""
	I1206 10:47:42.972759  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:42.973100  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:43.472601  399286 type.go:168] "Request Body" body=""
	I1206 10:47:43.472694  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:43.473056  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:43.473116  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:43.972500  399286 type.go:168] "Request Body" body=""
	I1206 10:47:43.972579  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:43.972899  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:44.472587  399286 type.go:168] "Request Body" body=""
	I1206 10:47:44.472675  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:44.473014  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:44.972574  399286 type.go:168] "Request Body" body=""
	I1206 10:47:44.972651  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:44.973031  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:45.472655  399286 type.go:168] "Request Body" body=""
	I1206 10:47:45.472726  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:45.473026  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:45.972718  399286 type.go:168] "Request Body" body=""
	I1206 10:47:45.972800  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:45.973152  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:45.973210  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:46.472509  399286 type.go:168] "Request Body" body=""
	I1206 10:47:46.472600  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:46.472959  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:46.972791  399286 type.go:168] "Request Body" body=""
	I1206 10:47:46.972860  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:46.973128  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:47.472798  399286 type.go:168] "Request Body" body=""
	I1206 10:47:47.472874  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:47.473208  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:47.972568  399286 type.go:168] "Request Body" body=""
	I1206 10:47:47.972648  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:47.972980  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:48.473410  399286 type.go:168] "Request Body" body=""
	I1206 10:47:48.473482  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:48.473747  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:48.473789  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:48.973486  399286 type.go:168] "Request Body" body=""
	I1206 10:47:48.973564  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:48.973890  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:49.472605  399286 type.go:168] "Request Body" body=""
	I1206 10:47:49.472724  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:49.473137  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:49.972518  399286 type.go:168] "Request Body" body=""
	I1206 10:47:49.972592  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:49.972867  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:50.472548  399286 type.go:168] "Request Body" body=""
	I1206 10:47:50.472628  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:50.472960  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:50.972581  399286 type.go:168] "Request Body" body=""
	I1206 10:47:50.972656  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:50.972999  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:50.973058  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:51.472529  399286 type.go:168] "Request Body" body=""
	I1206 10:47:51.472601  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:51.472873  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:51.972855  399286 type.go:168] "Request Body" body=""
	I1206 10:47:51.972934  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:51.973251  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:52.472527  399286 type.go:168] "Request Body" body=""
	I1206 10:47:52.472603  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:52.472925  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:52.972632  399286 type.go:168] "Request Body" body=""
	I1206 10:47:52.972710  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:52.973009  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:53.472551  399286 type.go:168] "Request Body" body=""
	I1206 10:47:53.472635  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:53.473004  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:53.473083  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:53.972778  399286 type.go:168] "Request Body" body=""
	I1206 10:47:53.972868  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:53.973278  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:54.472595  399286 type.go:168] "Request Body" body=""
	I1206 10:47:54.472680  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:54.473008  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:54.972544  399286 type.go:168] "Request Body" body=""
	I1206 10:47:54.972624  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:54.972997  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:55.472544  399286 type.go:168] "Request Body" body=""
	I1206 10:47:55.472633  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:55.472967  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:55.972686  399286 type.go:168] "Request Body" body=""
	I1206 10:47:55.972759  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:55.973084  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:55.973129  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:56.472527  399286 type.go:168] "Request Body" body=""
	I1206 10:47:56.472600  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:56.472935  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:56.972607  399286 type.go:168] "Request Body" body=""
	I1206 10:47:56.972688  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:56.973052  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:57.472495  399286 type.go:168] "Request Body" body=""
	I1206 10:47:57.472571  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:57.472885  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:57.972579  399286 type.go:168] "Request Body" body=""
	I1206 10:47:57.972653  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:57.972989  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:58.472576  399286 type.go:168] "Request Body" body=""
	I1206 10:47:58.472654  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:58.472981  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:58.473038  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:58.972524  399286 type.go:168] "Request Body" body=""
	I1206 10:47:58.972595  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:58.972920  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:59.472623  399286 type.go:168] "Request Body" body=""
	I1206 10:47:59.472702  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:59.473058  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:59.972774  399286 type.go:168] "Request Body" body=""
	I1206 10:47:59.972856  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:59.973198  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:00.472879  399286 type.go:168] "Request Body" body=""
	I1206 10:48:00.472963  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:00.473302  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:00.473350  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:00.973100  399286 type.go:168] "Request Body" body=""
	I1206 10:48:00.973182  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:00.973500  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:01.473348  399286 type.go:168] "Request Body" body=""
	I1206 10:48:01.473426  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:01.473749  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:01.972487  399286 type.go:168] "Request Body" body=""
	I1206 10:48:01.972565  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:01.972839  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:02.472524  399286 type.go:168] "Request Body" body=""
	I1206 10:48:02.472604  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:02.472916  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:02.972566  399286 type.go:168] "Request Body" body=""
	I1206 10:48:02.972640  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:02.972945  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:02.972990  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:03.472551  399286 type.go:168] "Request Body" body=""
	I1206 10:48:03.472641  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:03.472970  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:03.972530  399286 type.go:168] "Request Body" body=""
	I1206 10:48:03.972607  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:03.972945  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:04.472651  399286 type.go:168] "Request Body" body=""
	I1206 10:48:04.472730  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:04.473079  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:04.972502  399286 type.go:168] "Request Body" body=""
	I1206 10:48:04.972576  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:04.972860  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:05.472568  399286 type.go:168] "Request Body" body=""
	I1206 10:48:05.472646  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:05.473022  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:05.473077  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:05.972744  399286 type.go:168] "Request Body" body=""
	I1206 10:48:05.972834  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:05.973199  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:06.473241  399286 type.go:168] "Request Body" body=""
	I1206 10:48:06.473315  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:06.473604  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:06.972611  399286 type.go:168] "Request Body" body=""
	I1206 10:48:06.972691  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:06.972992  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:07.472569  399286 type.go:168] "Request Body" body=""
	I1206 10:48:07.472658  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:07.473030  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:07.972580  399286 type.go:168] "Request Body" body=""
	I1206 10:48:07.972659  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:07.972925  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:07.972966  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:08.472573  399286 type.go:168] "Request Body" body=""
	I1206 10:48:08.472665  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:08.472999  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:08.972712  399286 type.go:168] "Request Body" body=""
	I1206 10:48:08.972805  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:08.973106  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:09.472510  399286 type.go:168] "Request Body" body=""
	I1206 10:48:09.472584  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:09.472910  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:09.972623  399286 type.go:168] "Request Body" body=""
	I1206 10:48:09.972697  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:09.973067  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:09.973119  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:10.472815  399286 type.go:168] "Request Body" body=""
	I1206 10:48:10.472886  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:10.473224  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:10.972551  399286 type.go:168] "Request Body" body=""
	I1206 10:48:10.972627  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:10.972947  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:11.472597  399286 type.go:168] "Request Body" body=""
	I1206 10:48:11.472675  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:11.473029  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:11.972904  399286 type.go:168] "Request Body" body=""
	I1206 10:48:11.972981  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:11.973328  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:11.973383  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:12.472840  399286 type.go:168] "Request Body" body=""
	I1206 10:48:12.472917  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:12.473212  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:12.972545  399286 type.go:168] "Request Body" body=""
	I1206 10:48:12.972620  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:12.972959  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:13.472663  399286 type.go:168] "Request Body" body=""
	I1206 10:48:13.472740  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:13.473115  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:13.972809  399286 type.go:168] "Request Body" body=""
	I1206 10:48:13.972882  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:13.973148  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:14.472560  399286 type.go:168] "Request Body" body=""
	I1206 10:48:14.472633  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:14.472974  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:14.473026  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:14.972547  399286 type.go:168] "Request Body" body=""
	I1206 10:48:14.972633  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:14.972981  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:15.472494  399286 type.go:168] "Request Body" body=""
	I1206 10:48:15.472572  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:15.472888  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:15.972557  399286 type.go:168] "Request Body" body=""
	I1206 10:48:15.972632  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:15.973009  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:16.472796  399286 type.go:168] "Request Body" body=""
	I1206 10:48:16.472875  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:16.473235  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:16.473293  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:16.972964  399286 type.go:168] "Request Body" body=""
	I1206 10:48:16.973036  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:16.973307  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-196950 request/response cycle repeats every ~500ms from 10:48:17.473 through 10:49:18.472, each attempt returning an empty response (status="" headers="" milliseconds=0); roughly every fourth attempt, node_ready.go:55 re-logs the same warning as above: error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused ...]
	I1206 10:49:18.972548  399286 type.go:168] "Request Body" body=""
	I1206 10:49:18.972624  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:18.972945  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:19.472558  399286 type.go:168] "Request Body" body=""
	I1206 10:49:19.472639  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:19.472985  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:19.972653  399286 type.go:168] "Request Body" body=""
	I1206 10:49:19.972729  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:19.973082  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:19.973145  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:20.472552  399286 type.go:168] "Request Body" body=""
	I1206 10:49:20.472627  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:20.472962  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:20.972548  399286 type.go:168] "Request Body" body=""
	I1206 10:49:20.972633  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:20.972967  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:21.472516  399286 type.go:168] "Request Body" body=""
	I1206 10:49:21.472590  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:21.472909  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:21.972841  399286 type.go:168] "Request Body" body=""
	I1206 10:49:21.972916  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:21.973264  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:21.973322  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:22.473122  399286 type.go:168] "Request Body" body=""
	I1206 10:49:22.473197  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:22.473559  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:22.973364  399286 type.go:168] "Request Body" body=""
	I1206 10:49:22.973440  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:22.973787  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:23.472556  399286 type.go:168] "Request Body" body=""
	I1206 10:49:23.472643  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:23.472989  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:23.972688  399286 type.go:168] "Request Body" body=""
	I1206 10:49:23.972770  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:23.973107  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:24.472802  399286 type.go:168] "Request Body" body=""
	I1206 10:49:24.472877  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:24.473193  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:24.473243  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:24.972584  399286 type.go:168] "Request Body" body=""
	I1206 10:49:24.972665  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:24.973023  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:25.472744  399286 type.go:168] "Request Body" body=""
	I1206 10:49:25.472827  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:25.473187  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:25.972875  399286 type.go:168] "Request Body" body=""
	I1206 10:49:25.972942  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:25.973212  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:26.473315  399286 type.go:168] "Request Body" body=""
	I1206 10:49:26.473401  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:26.473746  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:26.473798  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:26.972480  399286 type.go:168] "Request Body" body=""
	I1206 10:49:26.972564  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:26.972910  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:27.472459  399286 type.go:168] "Request Body" body=""
	I1206 10:49:27.472532  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:27.472791  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:27.972488  399286 type.go:168] "Request Body" body=""
	I1206 10:49:27.972566  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:27.972886  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:28.472548  399286 type.go:168] "Request Body" body=""
	I1206 10:49:28.472623  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:28.472959  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:28.972639  399286 type.go:168] "Request Body" body=""
	I1206 10:49:28.972711  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:28.973000  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:28.973048  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:29.472558  399286 type.go:168] "Request Body" body=""
	I1206 10:49:29.472637  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:29.472984  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:29.972552  399286 type.go:168] "Request Body" body=""
	I1206 10:49:29.972639  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:29.972980  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:30.472653  399286 type.go:168] "Request Body" body=""
	I1206 10:49:30.472729  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:30.473004  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:30.972581  399286 type.go:168] "Request Body" body=""
	I1206 10:49:30.972663  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:30.972997  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:31.472551  399286 type.go:168] "Request Body" body=""
	I1206 10:49:31.472633  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:31.472995  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:31.473053  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:31.972765  399286 type.go:168] "Request Body" body=""
	I1206 10:49:31.972832  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:31.973098  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:32.472547  399286 type.go:168] "Request Body" body=""
	I1206 10:49:32.472631  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:32.473016  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:32.972568  399286 type.go:168] "Request Body" body=""
	I1206 10:49:32.972645  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:32.972982  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:33.472517  399286 type.go:168] "Request Body" body=""
	I1206 10:49:33.472591  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:33.472911  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:33.972503  399286 type.go:168] "Request Body" body=""
	I1206 10:49:33.972576  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:33.972901  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:33.972964  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:34.472657  399286 type.go:168] "Request Body" body=""
	I1206 10:49:34.472734  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:34.473129  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:34.972818  399286 type.go:168] "Request Body" body=""
	I1206 10:49:34.972889  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:34.973175  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:35.472878  399286 type.go:168] "Request Body" body=""
	I1206 10:49:35.472955  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:35.473329  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:35.973094  399286 type.go:168] "Request Body" body=""
	I1206 10:49:35.973174  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:35.973494  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:35.973549  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:36.472432  399286 type.go:168] "Request Body" body=""
	I1206 10:49:36.472505  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:36.472781  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:36.972828  399286 type.go:168] "Request Body" body=""
	I1206 10:49:36.972905  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:36.973252  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:37.472563  399286 type.go:168] "Request Body" body=""
	I1206 10:49:37.472637  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:37.472994  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:37.972683  399286 type.go:168] "Request Body" body=""
	I1206 10:49:37.972763  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:37.973077  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:38.472554  399286 type.go:168] "Request Body" body=""
	I1206 10:49:38.472633  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:38.472969  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:38.473033  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:38.972729  399286 type.go:168] "Request Body" body=""
	I1206 10:49:38.972808  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:38.973142  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:39.472497  399286 type.go:168] "Request Body" body=""
	I1206 10:49:39.472571  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:39.472854  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:39.972565  399286 type.go:168] "Request Body" body=""
	I1206 10:49:39.972639  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:39.972983  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:40.472566  399286 type.go:168] "Request Body" body=""
	I1206 10:49:40.472647  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:40.472966  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:40.972679  399286 type.go:168] "Request Body" body=""
	I1206 10:49:40.972760  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:40.973065  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:40.973121  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:41.472568  399286 type.go:168] "Request Body" body=""
	I1206 10:49:41.472657  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:41.473025  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:41.972922  399286 type.go:168] "Request Body" body=""
	I1206 10:49:41.972998  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:41.973339  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:42.473053  399286 type.go:168] "Request Body" body=""
	I1206 10:49:42.473124  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:42.473408  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:42.973276  399286 type.go:168] "Request Body" body=""
	I1206 10:49:42.973355  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:42.973694  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:42.973752  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:43.473503  399286 type.go:168] "Request Body" body=""
	I1206 10:49:43.473574  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:43.473918  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:43.972599  399286 type.go:168] "Request Body" body=""
	I1206 10:49:43.972670  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:43.973000  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:44.472572  399286 type.go:168] "Request Body" body=""
	I1206 10:49:44.472649  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:44.472982  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:44.972582  399286 type.go:168] "Request Body" body=""
	I1206 10:49:44.972670  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:44.973019  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:45.472513  399286 type.go:168] "Request Body" body=""
	I1206 10:49:45.472583  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:45.472857  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:45.472901  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:45.972615  399286 type.go:168] "Request Body" body=""
	I1206 10:49:45.972695  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:45.973015  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:46.472495  399286 type.go:168] "Request Body" body=""
	I1206 10:49:46.472575  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:46.472931  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:46.972626  399286 type.go:168] "Request Body" body=""
	I1206 10:49:46.974621  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:46.974930  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:47.472626  399286 type.go:168] "Request Body" body=""
	I1206 10:49:47.472726  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:47.473070  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:47.473127  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:47.972571  399286 type.go:168] "Request Body" body=""
	I1206 10:49:47.972667  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:47.972989  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:48.472520  399286 type.go:168] "Request Body" body=""
	I1206 10:49:48.472595  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:48.472920  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:48.972585  399286 type.go:168] "Request Body" body=""
	I1206 10:49:48.972669  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:48.973030  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:49.472592  399286 type.go:168] "Request Body" body=""
	I1206 10:49:49.472671  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:49.473006  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:49.972683  399286 type.go:168] "Request Body" body=""
	I1206 10:49:49.972754  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:49.973062  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:49.973108  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:50.472567  399286 type.go:168] "Request Body" body=""
	I1206 10:49:50.472644  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:50.472979  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:50.972718  399286 type.go:168] "Request Body" body=""
	I1206 10:49:50.972795  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:50.973180  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:51.472470  399286 type.go:168] "Request Body" body=""
	I1206 10:49:51.472554  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:51.472819  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:51.972847  399286 type.go:168] "Request Body" body=""
	I1206 10:49:51.972931  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:51.973379  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:51.973433  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:52.473217  399286 type.go:168] "Request Body" body=""
	I1206 10:49:52.473304  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:52.473657  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:52.973426  399286 type.go:168] "Request Body" body=""
	I1206 10:49:52.973497  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:52.973879  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:53.472575  399286 type.go:168] "Request Body" body=""
	I1206 10:49:53.472654  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:53.473006  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:53.972732  399286 type.go:168] "Request Body" body=""
	I1206 10:49:53.972810  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:53.973150  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:54.472486  399286 type.go:168] "Request Body" body=""
	I1206 10:49:54.472557  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:54.472823  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:54.472866  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:54.972537  399286 type.go:168] "Request Body" body=""
	I1206 10:49:54.972613  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:54.972990  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:55.472700  399286 type.go:168] "Request Body" body=""
	I1206 10:49:55.472773  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:55.473120  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:55.972590  399286 type.go:168] "Request Body" body=""
	I1206 10:49:55.972662  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:55.972932  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:56.472845  399286 type.go:168] "Request Body" body=""
	I1206 10:49:56.472928  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:56.473307  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:56.473367  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:56.973002  399286 type.go:168] "Request Body" body=""
	I1206 10:49:56.973079  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:56.973419  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:57.473158  399286 type.go:168] "Request Body" body=""
	I1206 10:49:57.473232  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:57.473497  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:57.973292  399286 type.go:168] "Request Body" body=""
	I1206 10:49:57.973367  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:57.973704  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:58.473494  399286 type.go:168] "Request Body" body=""
	I1206 10:49:58.473568  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:58.473902  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:58.473963  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:58.972557  399286 type.go:168] "Request Body" body=""
	I1206 10:49:58.972629  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:58.972908  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:59.472542  399286 type.go:168] "Request Body" body=""
	I1206 10:49:59.472622  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:59.472961  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:59.972698  399286 type.go:168] "Request Body" body=""
	I1206 10:49:59.972792  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:59.973143  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:00.475191  399286 type.go:168] "Request Body" body=""
	I1206 10:50:00.475305  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:00.475740  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:00.475791  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:00.972513  399286 type.go:168] "Request Body" body=""
	I1206 10:50:00.972593  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:00.972946  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:01.472607  399286 type.go:168] "Request Body" body=""
	I1206 10:50:01.472698  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:01.473000  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:01.972891  399286 type.go:168] "Request Body" body=""
	I1206 10:50:01.972964  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:01.973246  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:02.473133  399286 type.go:168] "Request Body" body=""
	I1206 10:50:02.473209  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:02.473541  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:02.973359  399286 type.go:168] "Request Body" body=""
	I1206 10:50:02.973436  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:02.973744  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:02.973794  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:03.472437  399286 type.go:168] "Request Body" body=""
	I1206 10:50:03.472515  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:03.472786  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:03.972517  399286 type.go:168] "Request Body" body=""
	I1206 10:50:03.972617  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:03.972974  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:04.472690  399286 type.go:168] "Request Body" body=""
	I1206 10:50:04.472770  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:04.473092  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:04.972613  399286 type.go:168] "Request Body" body=""
	I1206 10:50:04.972688  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:04.973025  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:05.472589  399286 type.go:168] "Request Body" body=""
	I1206 10:50:05.472662  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:05.472985  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:05.473041  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:05.972699  399286 type.go:168] "Request Body" body=""
	I1206 10:50:05.972774  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:05.973134  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:06.472872  399286 type.go:168] "Request Body" body=""
	I1206 10:50:06.472950  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:06.473229  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:06.973317  399286 type.go:168] "Request Body" body=""
	I1206 10:50:06.973399  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:06.973730  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:07.472451  399286 type.go:168] "Request Body" body=""
	I1206 10:50:07.472529  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:07.472886  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:07.972570  399286 type.go:168] "Request Body" body=""
	I1206 10:50:07.972651  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:07.972971  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:07.973027  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:08.472555  399286 type.go:168] "Request Body" body=""
	I1206 10:50:08.472629  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:08.472968  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:08.972685  399286 type.go:168] "Request Body" body=""
	I1206 10:50:08.972768  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:08.973126  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:09.472810  399286 type.go:168] "Request Body" body=""
	I1206 10:50:09.472880  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:09.473152  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:09.972581  399286 type.go:168] "Request Body" body=""
	I1206 10:50:09.972655  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:09.973005  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:09.973065  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:10.472745  399286 type.go:168] "Request Body" body=""
	I1206 10:50:10.472826  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:10.473165  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:10.972520  399286 type.go:168] "Request Body" body=""
	I1206 10:50:10.972626  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:10.972896  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:11.472575  399286 type.go:168] "Request Body" body=""
	I1206 10:50:11.472653  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:11.472988  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:11.972973  399286 type.go:168] "Request Body" body=""
	I1206 10:50:11.973057  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:11.973408  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:11.973457  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:12.473192  399286 type.go:168] "Request Body" body=""
	I1206 10:50:12.473263  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:12.473529  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:12.973277  399286 type.go:168] "Request Body" body=""
	I1206 10:50:12.973358  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:12.973714  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:13.473443  399286 type.go:168] "Request Body" body=""
	I1206 10:50:13.473531  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:13.473882  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:13.972569  399286 type.go:168] "Request Body" body=""
	I1206 10:50:13.972666  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:13.972977  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:14.472568  399286 type.go:168] "Request Body" body=""
	I1206 10:50:14.472647  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:14.472975  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:14.473038  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:14.972576  399286 type.go:168] "Request Body" body=""
	I1206 10:50:14.972652  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:14.972994  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:15.472548  399286 type.go:168] "Request Body" body=""
	I1206 10:50:15.472616  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:15.472889  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:15.972575  399286 type.go:168] "Request Body" body=""
	I1206 10:50:15.972652  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:15.972980  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:16.472885  399286 type.go:168] "Request Body" body=""
	I1206 10:50:16.472971  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:16.473326  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:16.473380  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:16.973027  399286 type.go:168] "Request Body" body=""
	I1206 10:50:16.973096  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:16.973374  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:17.473136  399286 type.go:168] "Request Body" body=""
	I1206 10:50:17.473212  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:17.473562  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:17.973242  399286 type.go:168] "Request Body" body=""
	I1206 10:50:17.973317  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:17.973682  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:18.473432  399286 type.go:168] "Request Body" body=""
	I1206 10:50:18.473500  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:18.473770  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:18.473813  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:18.972497  399286 type.go:168] "Request Body" body=""
	I1206 10:50:18.972578  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:18.972916  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:19.472629  399286 type.go:168] "Request Body" body=""
	I1206 10:50:19.472708  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:19.473031  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:19.972542  399286 type.go:168] "Request Body" body=""
	I1206 10:50:19.972615  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:19.972928  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:20.472572  399286 type.go:168] "Request Body" body=""
	I1206 10:50:20.472675  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:20.473027  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:20.972608  399286 type.go:168] "Request Body" body=""
	I1206 10:50:20.972691  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:20.973039  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:20.973093  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:21.472736  399286 type.go:168] "Request Body" body=""
	I1206 10:50:21.472814  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:21.473165  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:21.972862  399286 type.go:168] "Request Body" body=""
	I1206 10:50:21.972940  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:21.973280  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:22.473081  399286 type.go:168] "Request Body" body=""
	I1206 10:50:22.473165  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:22.473518  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:22.973293  399286 type.go:168] "Request Body" body=""
	I1206 10:50:22.973363  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:22.973736  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:22.973785  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:23.472452  399286 type.go:168] "Request Body" body=""
	I1206 10:50:23.472536  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:23.472892  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:23.972617  399286 type.go:168] "Request Body" body=""
	I1206 10:50:23.972696  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:23.973045  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:24.472734  399286 type.go:168] "Request Body" body=""
	I1206 10:50:24.472803  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:24.473087  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:24.972771  399286 type.go:168] "Request Body" body=""
	I1206 10:50:24.972846  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:24.973213  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:25.472766  399286 type.go:168] "Request Body" body=""
	I1206 10:50:25.472842  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:25.473168  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:25.473226  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:25.972592  399286 type.go:168] "Request Body" body=""
	I1206 10:50:25.972661  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:25.972949  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:26.472514  399286 type.go:168] "Request Body" body=""
	I1206 10:50:26.472594  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:26.472931  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:26.972899  399286 type.go:168] "Request Body" body=""
	I1206 10:50:26.972973  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:26.973261  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:27.473424  399286 type.go:168] "Request Body" body=""
	I1206 10:50:27.473500  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:27.473765  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:27.473815  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:27.972509  399286 type.go:168] "Request Body" body=""
	I1206 10:50:27.972592  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:27.972936  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:28.472486  399286 type.go:168] "Request Body" body=""
	I1206 10:50:28.472562  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:28.472923  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:28.972440  399286 type.go:168] "Request Body" body=""
	I1206 10:50:28.972512  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:28.972780  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:29.472546  399286 type.go:168] "Request Body" body=""
	I1206 10:50:29.472624  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:29.472988  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:29.972566  399286 type.go:168] "Request Body" body=""
	I1206 10:50:29.972650  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:29.972976  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:29.973030  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:30.472526  399286 type.go:168] "Request Body" body=""
	I1206 10:50:30.472595  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:30.472867  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:30.972538  399286 type.go:168] "Request Body" body=""
	I1206 10:50:30.972619  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:30.972967  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:31.472532  399286 type.go:168] "Request Body" body=""
	I1206 10:50:31.472614  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:31.472943  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:31.972822  399286 type.go:168] "Request Body" body=""
	I1206 10:50:31.972898  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:31.973163  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:31.973204  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:32.472846  399286 type.go:168] "Request Body" body=""
	I1206 10:50:32.472938  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:32.473300  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:32.973154  399286 type.go:168] "Request Body" body=""
	I1206 10:50:32.973228  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:32.973551  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:33.473237  399286 type.go:168] "Request Body" body=""
	I1206 10:50:33.473313  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:33.473581  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:33.973393  399286 type.go:168] "Request Body" body=""
	I1206 10:50:33.973465  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:33.973800  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:33.973854  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:34.472558  399286 type.go:168] "Request Body" body=""
	I1206 10:50:34.472637  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:34.472972  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:34.972519  399286 type.go:168] "Request Body" body=""
	I1206 10:50:34.972599  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:34.972924  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:35.472616  399286 type.go:168] "Request Body" body=""
	I1206 10:50:35.472695  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:35.473014  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:35.972579  399286 type.go:168] "Request Body" body=""
	I1206 10:50:35.972655  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:35.973034  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:36.472776  399286 type.go:168] "Request Body" body=""
	I1206 10:50:36.472844  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:36.473149  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:36.473213  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:36.972912  399286 type.go:168] "Request Body" body=""
	I1206 10:50:36.972989  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:36.973334  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:37.473153  399286 type.go:168] "Request Body" body=""
	I1206 10:50:37.473234  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:37.473545  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:37.973320  399286 type.go:168] "Request Body" body=""
	I1206 10:50:37.973389  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:37.973719  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:38.473508  399286 type.go:168] "Request Body" body=""
	I1206 10:50:38.473585  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:38.473917  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:38.473974  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:38.972671  399286 type.go:168] "Request Body" body=""
	I1206 10:50:38.972749  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:38.973130  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:39.472813  399286 type.go:168] "Request Body" body=""
	I1206 10:50:39.472890  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:39.473190  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:39.972557  399286 type.go:168] "Request Body" body=""
	I1206 10:50:39.972649  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:39.972986  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:40.472571  399286 type.go:168] "Request Body" body=""
	I1206 10:50:40.472658  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:40.472975  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:40.972515  399286 type.go:168] "Request Body" body=""
	I1206 10:50:40.972584  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:40.972892  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:40.972940  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:41.472601  399286 type.go:168] "Request Body" body=""
	I1206 10:50:41.472684  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:41.473063  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:41.972896  399286 type.go:168] "Request Body" body=""
	I1206 10:50:41.972981  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:41.973322  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:42.472688  399286 type.go:168] "Request Body" body=""
	I1206 10:50:42.472753  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:42.473021  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:42.972603  399286 type.go:168] "Request Body" body=""
	I1206 10:50:42.972678  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:42.973024  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:42.973077  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:43.472713  399286 type.go:168] "Request Body" body=""
	I1206 10:50:43.472795  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:43.473163  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:43.972566  399286 type.go:168] "Request Body" body=""
	I1206 10:50:43.972641  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:43.972943  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:44.472578  399286 type.go:168] "Request Body" body=""
	I1206 10:50:44.472651  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:44.472949  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:44.972603  399286 type.go:168] "Request Body" body=""
	I1206 10:50:44.972680  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:44.973055  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:44.973112  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:45.472760  399286 type.go:168] "Request Body" body=""
	I1206 10:50:45.472833  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:45.473168  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:45.972582  399286 type.go:168] "Request Body" body=""
	I1206 10:50:45.972658  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:45.972953  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:46.472685  399286 type.go:168] "Request Body" body=""
	I1206 10:50:46.472772  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:46.473240  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:46.972953  399286 type.go:168] "Request Body" body=""
	I1206 10:50:46.973034  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:46.973311  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:46.973368  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:47.473089  399286 type.go:168] "Request Body" body=""
	I1206 10:50:47.473162  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:47.473495  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:47.973337  399286 type.go:168] "Request Body" body=""
	I1206 10:50:47.973414  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:47.973765  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:48.472461  399286 type.go:168] "Request Body" body=""
	I1206 10:50:48.472532  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:48.472800  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:48.972484  399286 type.go:168] "Request Body" body=""
	I1206 10:50:48.972555  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:48.972856  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:49.472591  399286 type.go:168] "Request Body" body=""
	I1206 10:50:49.472674  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:49.472971  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:49.473017  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:49.972486  399286 type.go:168] "Request Body" body=""
	I1206 10:50:49.972555  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:49.972818  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:50.472595  399286 type.go:168] "Request Body" body=""
	I1206 10:50:50.472673  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:50.473012  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:50.972610  399286 type.go:168] "Request Body" body=""
	I1206 10:50:50.972682  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:50.973033  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:51.472648  399286 type.go:168] "Request Body" body=""
	I1206 10:50:51.472722  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:51.473053  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:51.473104  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:51.972927  399286 type.go:168] "Request Body" body=""
	I1206 10:50:51.973006  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:51.973306  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:52.473170  399286 type.go:168] "Request Body" body=""
	I1206 10:50:52.473264  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:52.473614  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:52.973403  399286 type.go:168] "Request Body" body=""
	I1206 10:50:52.973483  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:52.973779  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:53.472504  399286 type.go:168] "Request Body" body=""
	I1206 10:50:53.472613  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:53.472956  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:53.972692  399286 type.go:168] "Request Body" body=""
	I1206 10:50:53.972766  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:53.973130  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:53.973190  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:54.472528  399286 type.go:168] "Request Body" body=""
	I1206 10:50:54.472607  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:54.472878  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:54.972605  399286 type.go:168] "Request Body" body=""
	I1206 10:50:54.972688  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:54.973068  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:55.472818  399286 type.go:168] "Request Body" body=""
	I1206 10:50:55.472895  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:55.473202  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:55.972520  399286 type.go:168] "Request Body" body=""
	I1206 10:50:55.972603  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:55.972935  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:56.472654  399286 type.go:168] "Request Body" body=""
	I1206 10:50:56.472729  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:56.473032  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:56.473084  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:56.972842  399286 type.go:168] "Request Body" body=""
	I1206 10:50:56.972920  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:56.973318  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:57.473075  399286 type.go:168] "Request Body" body=""
	I1206 10:50:57.473143  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:57.473455  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:57.973290  399286 type.go:168] "Request Body" body=""
	I1206 10:50:57.973373  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:57.973726  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:58.472463  399286 type.go:168] "Request Body" body=""
	I1206 10:50:58.472542  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:58.472877  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:58.972566  399286 type.go:168] "Request Body" body=""
	I1206 10:50:58.972641  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:58.972980  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:58.973033  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:59.472570  399286 type.go:168] "Request Body" body=""
	I1206 10:50:59.472643  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:59.472941  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:59.972571  399286 type.go:168] "Request Body" body=""
	I1206 10:50:59.972657  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:59.973000  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:51:00.472549  399286 type.go:168] "Request Body" body=""
	I1206 10:51:00.472645  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:51:00.473014  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:51:00.972576  399286 type.go:168] "Request Body" body=""
	I1206 10:51:00.972652  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:51:00.972971  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:51:01.472606  399286 type.go:168] "Request Body" body=""
	I1206 10:51:01.472680  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:51:01.473017  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:51:01.473078  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:51:01.972762  399286 type.go:168] "Request Body" body=""
	I1206 10:51:01.972832  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:51:01.973108  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:51:02.472577  399286 type.go:168] "Request Body" body=""
	I1206 10:51:02.472675  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:51:02.473037  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:51:02.972793  399286 type.go:168] "Request Body" body=""
	I1206 10:51:02.972870  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:51:02.973217  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:51:03.472909  399286 type.go:168] "Request Body" body=""
	I1206 10:51:03.472990  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:51:03.473316  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:51:03.473367  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:51:03.973130  399286 type.go:168] "Request Body" body=""
	I1206 10:51:03.973216  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:51:03.973569  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:51:04.473254  399286 type.go:168] "Request Body" body=""
	I1206 10:51:04.473335  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:51:04.473708  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:51:04.973449  399286 type.go:168] "Request Body" body=""
	I1206 10:51:04.973548  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:51:04.973831  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:51:05.472562  399286 type.go:168] "Request Body" body=""
	I1206 10:51:05.472640  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:51:05.472982  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:51:05.972586  399286 type.go:168] "Request Body" body=""
	I1206 10:51:05.972670  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:51:05.973021  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:51:05.973092  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:51:06.472600  399286 type.go:168] "Request Body" body=""
	I1206 10:51:06.472689  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:51:06.473022  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:51:06.972909  399286 type.go:168] "Request Body" body=""
	I1206 10:51:06.972998  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:51:06.973336  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:51:07.473123  399286 type.go:168] "Request Body" body=""
	I1206 10:51:07.473186  399286 node_ready.go:38] duration metric: took 6m0.000853216s for node "functional-196950" to be "Ready" ...
	I1206 10:51:07.476374  399286 out.go:203] 
	W1206 10:51:07.479349  399286 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1206 10:51:07.479391  399286 out.go:285] * 
	W1206 10:51:07.481554  399286 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:51:07.484691  399286 out.go:203] 

** /stderr **
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-arm64 start -p functional-196950 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m6.56678934s for "functional-196950" cluster.
I1206 10:51:08.309332  364855 config.go:182] Loaded profile config "functional-196950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
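The stderr above shows the shape of the failure: minikube issues a GET to /api/v1/nodes/functional-196950 roughly every 500ms, each attempt dies with "connection refused" while the apiserver is down, and once the 6m0s StartHostTimeout expires the wait aborts with "WaitNodeCondition: context deadline exceeded". A minimal Go sketch of that retry pattern, illustrative only and not minikube's actual implementation (the URL, interval, and deadline are taken from the log):

package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitNodeReady polls the apiserver's node endpoint on the same 500ms
// cadence visible in the round_trippers lines above, until a request
// succeeds or the context deadline expires.
func waitNodeReady(ctx context.Context, url string) error {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
	}}
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return fmt.Errorf("WaitNodeCondition: %w", ctx.Err())
		case <-ticker.C:
			resp, err := client.Get(url)
			if err != nil {
				continue // e.g. "dial tcp ... connection refused"
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // a real client would also decode and check the Ready condition
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	fmt.Println(waitNodeReady(ctx, "https://192.168.49.2:8441/api/v1/nodes/functional-196950"))
}
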
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-196950
helpers_test.go:243: (dbg) docker inspect functional-196950: 

-- stdout --
	[
	    {
	        "Id": "d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1",
	        "Created": "2025-12-06T10:36:45.201779678Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 393848,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T10:36:45.318229053Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/hostname",
	        "HostsPath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/hosts",
	        "LogPath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1-json.log",
	        "Name": "/functional-196950",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-196950:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-196950",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1",
	                "LowerDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1-init/diff:/var/lib/docker/overlay2/5011226d55616c9977b14c1fe617d1302fe59373df05ce8ec6e21b79143a1c57/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-196950",
	                "Source": "/var/lib/docker/volumes/functional-196950/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-196950",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-196950",
	                "name.minikube.sigs.k8s.io": "functional-196950",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9b8f961d55d7529aed7b841f2ac9f818c22ff12b8ad73f2d6bcee22656d9749a",
	            "SandboxKey": "/var/run/docker/netns/9b8f961d55d7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33158"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-196950": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:c1:40:2a:93:47",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a566bfdfd33a868cf61e5b18b36cbd55e9868f24cbb091e055ae606aeb8c6f03",
	                    "EndpointID": "452fe32bde0c42c4c35d700488ae93aeecc6c6a971ac6f1a8a492dbc4b328ed9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-196950",
	                        "d150aac7296d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
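Two details in the inspect output are worth noting: HostConfig.PortBindings asks Docker for ephemeral host ports (every requested HostPort is empty), and NetworkSettings.Ports records what was actually assigned (22/tcp on 127.0.0.1:33158, the 8441/tcp apiserver port on 127.0.0.1:33161, and so on). The cli_runner entries later in the log read these back with a Go template; a self-contained sketch of the same lookup, assuming a local docker CLI and the container name from this report:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The same template the log's cli_runner uses to find the host port
	// Docker published for the container's SSH port.
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format,
		"functional-196950").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	// Against the inspect output above this prints 33158.
	fmt.Println("22/tcp is published on 127.0.0.1:" + strings.TrimSpace(string(out)))
}
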
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-196950 -n functional-196950
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-196950 -n functional-196950: exit status 2 (336.895202ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-196950 logs -n 25: (1.079284188s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-205266 image ls                                                                                                                                │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ ssh            │ functional-205266 ssh sudo cat /usr/share/ca-certificates/364855.pem                                                                                      │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image          │ functional-205266 image save kicbase/echo-server:functional-205266 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ ssh            │ functional-205266 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image          │ functional-205266 image rm kicbase/echo-server:functional-205266 --alsologtostderr                                                                        │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ ssh            │ functional-205266 ssh sudo cat /etc/ssl/certs/3648552.pem                                                                                                 │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image          │ functional-205266 image ls                                                                                                                                │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ ssh            │ functional-205266 ssh sudo cat /usr/share/ca-certificates/3648552.pem                                                                                     │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image          │ functional-205266 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ ssh            │ functional-205266 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image          │ functional-205266 image ls                                                                                                                                │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image          │ functional-205266 image save --daemon kicbase/echo-server:functional-205266 --alsologtostderr                                                             │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ update-context │ functional-205266 update-context --alsologtostderr -v=2                                                                                                   │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ update-context │ functional-205266 update-context --alsologtostderr -v=2                                                                                                   │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ update-context │ functional-205266 update-context --alsologtostderr -v=2                                                                                                   │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image          │ functional-205266 image ls --format short --alsologtostderr                                                                                               │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image          │ functional-205266 image ls --format yaml --alsologtostderr                                                                                                │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ ssh            │ functional-205266 ssh pgrep buildkitd                                                                                                                     │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │                     │
	│ image          │ functional-205266 image ls --format json --alsologtostderr                                                                                                │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image          │ functional-205266 image build -t localhost/my-image:functional-205266 testdata/build --alsologtostderr                                                    │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image          │ functional-205266 image ls --format table --alsologtostderr                                                                                               │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image          │ functional-205266 image ls                                                                                                                                │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ delete         │ -p functional-205266                                                                                                                                      │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ start          │ -p functional-196950 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0         │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │                     │
	│ start          │ -p functional-196950 --alsologtostderr -v=8                                                                                                               │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:45 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:45:01
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:45:01.787203  399286 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:45:01.787433  399286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:45:01.787467  399286 out.go:374] Setting ErrFile to fd 2...
	I1206 10:45:01.787489  399286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:45:01.787778  399286 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 10:45:01.788186  399286 out.go:368] Setting JSON to false
	I1206 10:45:01.789151  399286 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8853,"bootTime":1765009049,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 10:45:01.789259  399286 start.go:143] virtualization:  
	I1206 10:45:01.792729  399286 out.go:179] * [functional-196950] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:45:01.796494  399286 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 10:45:01.796574  399286 notify.go:221] Checking for updates...
	I1206 10:45:01.802323  399286 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:45:01.805290  399286 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 10:45:01.808768  399286 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-362985/.minikube
	I1206 10:45:01.811515  399286 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:45:01.814379  399286 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:45:01.817672  399286 config.go:182] Loaded profile config "functional-196950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:45:01.817798  399286 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:45:01.851887  399286 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:45:01.852009  399286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:45:01.921321  399286 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:45:01.909571102 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:45:01.921426  399286 docker.go:319] overlay module found
	I1206 10:45:01.926314  399286 out.go:179] * Using the docker driver based on existing profile
	I1206 10:45:01.929149  399286 start.go:309] selected driver: docker
	I1206 10:45:01.929174  399286 start.go:927] validating driver "docker" against &{Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:45:01.929299  399286 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:45:01.929402  399286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:45:02.005684  399286 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:45:01.991905909 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:45:02.006178  399286 cni.go:84] Creating CNI manager for ""
	I1206 10:45:02.006252  399286 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:45:02.006308  399286 start.go:353] cluster config:
	{Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:45:02.012455  399286 out.go:179] * Starting "functional-196950" primary control-plane node in "functional-196950" cluster
	I1206 10:45:02.015293  399286 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 10:45:02.018502  399286 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:45:02.021547  399286 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 10:45:02.021609  399286 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1206 10:45:02.021620  399286 cache.go:65] Caching tarball of preloaded images
	I1206 10:45:02.021746  399286 preload.go:238] Found /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1206 10:45:02.021762  399286 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1206 10:45:02.021883  399286 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/config.json ...
	I1206 10:45:02.022120  399286 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:45:02.058171  399286 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 10:45:02.058196  399286 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 10:45:02.058216  399286 cache.go:243] Successfully downloaded all kic artifacts
	I1206 10:45:02.058248  399286 start.go:360] acquireMachinesLock for functional-196950: {Name:mkd2471f275d1d2a438cb4ce89f1d1521a0fb340 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:45:02.058324  399286 start.go:364] duration metric: took 51.241µs to acquireMachinesLock for "functional-196950"
	I1206 10:45:02.058347  399286 start.go:96] Skipping create...Using existing machine configuration
	I1206 10:45:02.058352  399286 fix.go:54] fixHost starting: 
	I1206 10:45:02.058623  399286 cli_runner.go:164] Run: docker container inspect functional-196950 --format={{.State.Status}}
	I1206 10:45:02.075952  399286 fix.go:112] recreateIfNeeded on functional-196950: state=Running err=<nil>
	W1206 10:45:02.075984  399286 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 10:45:02.079219  399286 out.go:252] * Updating the running docker "functional-196950" container ...
	I1206 10:45:02.079261  399286 machine.go:94] provisionDockerMachine start ...
	I1206 10:45:02.079396  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:02.097606  399286 main.go:143] libmachine: Using SSH client type: native
	I1206 10:45:02.097945  399286 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1206 10:45:02.097963  399286 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:45:02.251117  399286 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-196950
	
	I1206 10:45:02.251145  399286 ubuntu.go:182] provisioning hostname "functional-196950"
	I1206 10:45:02.251226  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:02.270896  399286 main.go:143] libmachine: Using SSH client type: native
	I1206 10:45:02.271293  399286 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1206 10:45:02.271357  399286 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-196950 && echo "functional-196950" | sudo tee /etc/hostname
	I1206 10:45:02.434988  399286 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-196950
	
	I1206 10:45:02.435098  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:02.453713  399286 main.go:143] libmachine: Using SSH client type: native
	I1206 10:45:02.454033  399286 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1206 10:45:02.454055  399286 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-196950' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-196950/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-196950' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:45:02.607868  399286 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 10:45:02.607903  399286 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-362985/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-362985/.minikube}
	I1206 10:45:02.607940  399286 ubuntu.go:190] setting up certificates
	I1206 10:45:02.607949  399286 provision.go:84] configureAuth start
	I1206 10:45:02.608015  399286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-196950
	I1206 10:45:02.626134  399286 provision.go:143] copyHostCerts
	I1206 10:45:02.626186  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem
	I1206 10:45:02.626227  399286 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem, removing ...
	I1206 10:45:02.626247  399286 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem
	I1206 10:45:02.626323  399286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem (1082 bytes)
	I1206 10:45:02.626456  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem
	I1206 10:45:02.626477  399286 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem, removing ...
	I1206 10:45:02.626487  399286 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem
	I1206 10:45:02.626523  399286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem (1123 bytes)
	I1206 10:45:02.626584  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem
	I1206 10:45:02.626607  399286 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem, removing ...
	I1206 10:45:02.626611  399286 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem
	I1206 10:45:02.626634  399286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem (1679 bytes)
	I1206 10:45:02.626683  399286 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem org=jenkins.functional-196950 san=[127.0.0.1 192.168.49.2 functional-196950 localhost minikube]
	I1206 10:45:02.961448  399286 provision.go:177] copyRemoteCerts
	I1206 10:45:02.961531  399286 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:45:02.961575  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:02.978755  399286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:45:03.095893  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1206 10:45:03.095982  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 10:45:03.114611  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1206 10:45:03.114706  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 10:45:03.135133  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1206 10:45:03.135195  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 10:45:03.153562  399286 provision.go:87] duration metric: took 545.588133ms to configureAuth
	I1206 10:45:03.153601  399286 ubuntu.go:206] setting minikube options for container-runtime
	I1206 10:45:03.153843  399286 config.go:182] Loaded profile config "functional-196950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:45:03.153992  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:03.171946  399286 main.go:143] libmachine: Using SSH client type: native
	I1206 10:45:03.172256  399286 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1206 10:45:03.172279  399286 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 10:45:03.524489  399286 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 10:45:03.524512  399286 machine.go:97] duration metric: took 1.445242076s to provisionDockerMachine
	I1206 10:45:03.524523  399286 start.go:293] postStartSetup for "functional-196950" (driver="docker")
	I1206 10:45:03.524536  399286 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:45:03.524603  399286 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:45:03.524644  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:03.555449  399286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:45:03.668233  399286 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:45:03.672046  399286 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1206 10:45:03.672068  399286 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1206 10:45:03.672073  399286 command_runner.go:130] > VERSION_ID="12"
	I1206 10:45:03.672078  399286 command_runner.go:130] > VERSION="12 (bookworm)"
	I1206 10:45:03.672084  399286 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1206 10:45:03.672087  399286 command_runner.go:130] > ID=debian
	I1206 10:45:03.672092  399286 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1206 10:45:03.672114  399286 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1206 10:45:03.672130  399286 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1206 10:45:03.672206  399286 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 10:45:03.672228  399286 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 10:45:03.672240  399286 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-362985/.minikube/addons for local assets ...
	I1206 10:45:03.672300  399286 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-362985/.minikube/files for local assets ...
	I1206 10:45:03.672390  399286 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem -> 3648552.pem in /etc/ssl/certs
	I1206 10:45:03.672402  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem -> /etc/ssl/certs/3648552.pem
	I1206 10:45:03.672481  399286 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/test/nested/copy/364855/hosts -> hosts in /etc/test/nested/copy/364855
	I1206 10:45:03.672489  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/test/nested/copy/364855/hosts -> /etc/test/nested/copy/364855/hosts
	I1206 10:45:03.672536  399286 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/364855
	I1206 10:45:03.681376  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem --> /etc/ssl/certs/3648552.pem (1708 bytes)
	I1206 10:45:03.700845  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/test/nested/copy/364855/hosts --> /etc/test/nested/copy/364855/hosts (40 bytes)
	I1206 10:45:03.720695  399286 start.go:296] duration metric: took 196.153156ms for postStartSetup
	I1206 10:45:03.720782  399286 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:45:03.720851  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:03.739871  399286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:45:03.844136  399286 command_runner.go:130] > 11%
	I1206 10:45:03.844709  399286 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 10:45:03.849387  399286 command_runner.go:130] > 174G
	I1206 10:45:03.849978  399286 fix.go:56] duration metric: took 1.791620292s for fixHost
	I1206 10:45:03.850000  399286 start.go:83] releasing machines lock for "functional-196950", held for 1.791664797s
	I1206 10:45:03.850077  399286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-196950
	I1206 10:45:03.867785  399286 ssh_runner.go:195] Run: cat /version.json
	I1206 10:45:03.867838  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:03.868113  399286 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:45:03.868167  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:03.886546  399286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:45:03.911694  399286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:45:03.991370  399286 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764843390-22032", "minikube_version": "v1.37.0", "commit": "d7bfd7d6d80c3eeb1d6cf1c5f081f8642bc1997e"}
	I1206 10:45:03.991537  399286 ssh_runner.go:195] Run: systemctl --version
	I1206 10:45:04.088215  399286 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1206 10:45:04.091250  399286 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1206 10:45:04.091291  399286 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1206 10:45:04.091431  399286 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 10:45:04.130964  399286 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1206 10:45:04.136249  399286 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1206 10:45:04.136293  399286 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:45:04.136352  399286 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:45:04.145113  399286 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 10:45:04.145182  399286 start.go:496] detecting cgroup driver to use...
	I1206 10:45:04.145222  399286 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 10:45:04.145282  399286 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 10:45:04.161420  399286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 10:45:04.175205  399286 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:45:04.175315  399286 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:45:04.191496  399286 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:45:04.205243  399286 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:45:04.349911  399286 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:45:04.470887  399286 docker.go:234] disabling docker service ...
	I1206 10:45:04.471006  399286 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:45:04.486933  399286 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:45:04.500707  399286 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:45:04.632842  399286 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:45:04.756279  399286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 10:45:04.770461  399286 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:45:04.785365  399286 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1206 10:45:04.786482  399286 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 10:45:04.786596  399286 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:45:04.796852  399286 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 10:45:04.796980  399286 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:45:04.806654  399286 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:45:04.816002  399286 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:45:04.825576  399286 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:45:04.834547  399286 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:45:04.844889  399286 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:45:04.854032  399286 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
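
The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins pause_image, switches cgroup_manager to cgroupfs, re-adds conmon_cgroup = "pod", and injects a default_sysctls entry opening unprivileged ports from 0. The same line-oriented edit works without sed; a sketch of one such replace in Go (the regexes mirror the sed expressions, the helper is mine):

    package main

    import (
    	"os"
    	"regexp"
    )

    // setKey rewrites every line matching pattern to repl in path,
    // mirroring: sudo sed -i 's|^.*key = .*$|key = "value"|' path
    func setKey(path, pattern, repl string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	re := regexp.MustCompile("(?m)" + pattern)
    	return os.WriteFile(path, re.ReplaceAll(data, []byte(repl)), 0644)
    }

    func must(err error) {
    	if err != nil {
    		panic(err)
    	}
    }

    func main() {
    	conf := "/etc/crio/crio.conf.d/02-crio.conf"
    	// The same substitutions the log performs with sed.
    	must(setKey(conf, `^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.10.1"`))
    	must(setKey(conf, `^.*cgroup_manager = .*$`, `cgroup_manager = "cgroupfs"`))
    }
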
	I1206 10:45:04.863103  399286 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:45:04.870297  399286 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1206 10:45:04.871475  399286 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 10:45:04.879247  399286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:45:04.992959  399286 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 10:45:05.192927  399286 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 10:45:05.193085  399286 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 10:45:05.197937  399286 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1206 10:45:05.197964  399286 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1206 10:45:05.197971  399286 command_runner.go:130] > Device: 0,72	Inode: 1640        Links: 1
	I1206 10:45:05.197987  399286 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 10:45:05.198031  399286 command_runner.go:130] > Access: 2025-12-06 10:45:05.125759427 +0000
	I1206 10:45:05.198049  399286 command_runner.go:130] > Modify: 2025-12-06 10:45:05.125759427 +0000
	I1206 10:45:05.198060  399286 command_runner.go:130] > Change: 2025-12-06 10:45:05.125759427 +0000
	I1206 10:45:05.198063  399286 command_runner.go:130] >  Birth: -
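
"Will wait 60s for socket path" above is a poll loop: after restarting crio, minikube stats the socket until it exists or the deadline passes; here it appeared within ~200ms. A minimal version of that wait (the poll interval and error text are assumptions):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls path until it exists or timeout elapses,
    // like the 60s wait for /var/run/crio/crio.sock above.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(200 * time.Millisecond) // assumed interval
    	}
    	return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
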
	I1206 10:45:05.198081  399286 start.go:564] Will wait 60s for crictl version
	I1206 10:45:05.198158  399286 ssh_runner.go:195] Run: which crictl
	I1206 10:45:05.202333  399286 command_runner.go:130] > /usr/local/bin/crictl
	I1206 10:45:05.202451  399286 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 10:45:05.227773  399286 command_runner.go:130] > Version:  0.1.0
	I1206 10:45:05.227855  399286 command_runner.go:130] > RuntimeName:  cri-o
	I1206 10:45:05.227876  399286 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1206 10:45:05.227895  399286 command_runner.go:130] > RuntimeApiVersion:  v1
	I1206 10:45:05.230308  399286 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 10:45:05.230460  399286 ssh_runner.go:195] Run: crio --version
	I1206 10:45:05.261871  399286 command_runner.go:130] > crio version 1.34.3
	I1206 10:45:05.261971  399286 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1206 10:45:05.261992  399286 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1206 10:45:05.262014  399286 command_runner.go:130] >    GitTreeState:   dirty
	I1206 10:45:05.262045  399286 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1206 10:45:05.262062  399286 command_runner.go:130] >    GoVersion:      go1.24.6
	I1206 10:45:05.262083  399286 command_runner.go:130] >    Compiler:       gc
	I1206 10:45:05.262102  399286 command_runner.go:130] >    Platform:       linux/arm64
	I1206 10:45:05.262141  399286 command_runner.go:130] >    Linkmode:       static
	I1206 10:45:05.262176  399286 command_runner.go:130] >    BuildTags:
	I1206 10:45:05.262192  399286 command_runner.go:130] >      static
	I1206 10:45:05.262229  399286 command_runner.go:130] >      netgo
	I1206 10:45:05.262248  399286 command_runner.go:130] >      osusergo
	I1206 10:45:05.262264  399286 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1206 10:45:05.262283  399286 command_runner.go:130] >      seccomp
	I1206 10:45:05.262317  399286 command_runner.go:130] >      apparmor
	I1206 10:45:05.262335  399286 command_runner.go:130] >      selinux
	I1206 10:45:05.262352  399286 command_runner.go:130] >    LDFlags:          unknown
	I1206 10:45:05.262371  399286 command_runner.go:130] >    SeccompEnabled:   true
	I1206 10:45:05.262402  399286 command_runner.go:130] >    AppArmorEnabled:  false
	I1206 10:45:05.263735  399286 ssh_runner.go:195] Run: crio --version
	I1206 10:45:05.292275  399286 command_runner.go:130] > crio version 1.34.3
	I1206 10:45:05.292350  399286 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1206 10:45:05.292370  399286 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1206 10:45:05.292389  399286 command_runner.go:130] >    GitTreeState:   dirty
	I1206 10:45:05.292419  399286 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1206 10:45:05.292445  399286 command_runner.go:130] >    GoVersion:      go1.24.6
	I1206 10:45:05.292464  399286 command_runner.go:130] >    Compiler:       gc
	I1206 10:45:05.292484  399286 command_runner.go:130] >    Platform:       linux/arm64
	I1206 10:45:05.292510  399286 command_runner.go:130] >    Linkmode:       static
	I1206 10:45:05.292529  399286 command_runner.go:130] >    BuildTags:
	I1206 10:45:05.292548  399286 command_runner.go:130] >      static
	I1206 10:45:05.292577  399286 command_runner.go:130] >      netgo
	I1206 10:45:05.292594  399286 command_runner.go:130] >      osusergo
	I1206 10:45:05.292622  399286 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1206 10:45:05.292652  399286 command_runner.go:130] >      seccomp
	I1206 10:45:05.292669  399286 command_runner.go:130] >      apparmor
	I1206 10:45:05.292692  399286 command_runner.go:130] >      selinux
	I1206 10:45:05.292731  399286 command_runner.go:130] >    LDFlags:          unknown
	I1206 10:45:05.292749  399286 command_runner.go:130] >    SeccompEnabled:   true
	I1206 10:45:05.292767  399286 command_runner.go:130] >    AppArmorEnabled:  false
	I1206 10:45:05.300434  399286 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1206 10:45:05.303425  399286 cli_runner.go:164] Run: docker network inspect functional-196950 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:45:05.320718  399286 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 10:45:05.324954  399286 command_runner.go:130] > 192.168.49.1	host.minikube.internal
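
The grep above confirms host.minikube.internal already resolves to the docker network gateway 192.168.49.1 inside the node. A local version of that check; the append-on-miss branch is an assumption about what happens when the entry is absent, not something this log shows:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	entry := "192.168.49.1\thost.minikube.internal"
    	for _, line := range strings.Split(string(data), "\n") {
    		if strings.Contains(line, "host.minikube.internal") {
    			fmt.Println("found:", line)
    			return
    		}
    	}
    	// Missing: append the entry (assumed behavior on a miss).
    	f, err := os.OpenFile("/etc/hosts", os.O_APPEND|os.O_WRONLY, 0644)
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	fmt.Fprintln(f, entry)
    }
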
	I1206 10:45:05.325142  399286 kubeadm.go:884] updating cluster {Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:45:05.325270  399286 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 10:45:05.325346  399286 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:45:05.356177  399286 command_runner.go:130] > {
	I1206 10:45:05.356195  399286 command_runner.go:130] >   "images":  [
	I1206 10:45:05.356199  399286 command_runner.go:130] >     {
	I1206 10:45:05.356208  399286 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1206 10:45:05.356213  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.356218  399286 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1206 10:45:05.356222  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356226  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.356235  399286 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1206 10:45:05.356243  399286 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1206 10:45:05.356246  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356251  399286 command_runner.go:130] >       "size":  "111333938",
	I1206 10:45:05.356254  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.356259  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.356262  399286 command_runner.go:130] >     },
	I1206 10:45:05.356265  399286 command_runner.go:130] >     {
	I1206 10:45:05.356272  399286 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1206 10:45:05.356285  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.356291  399286 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1206 10:45:05.356294  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356298  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.356307  399286 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1206 10:45:05.356315  399286 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1206 10:45:05.356318  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356322  399286 command_runner.go:130] >       "size":  "29037500",
	I1206 10:45:05.356326  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.356334  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.356337  399286 command_runner.go:130] >     },
	I1206 10:45:05.356340  399286 command_runner.go:130] >     {
	I1206 10:45:05.356346  399286 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1206 10:45:05.356350  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.356355  399286 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1206 10:45:05.356358  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356362  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.356369  399286 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1206 10:45:05.356377  399286 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1206 10:45:05.356380  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356385  399286 command_runner.go:130] >       "size":  "74491780",
	I1206 10:45:05.356389  399286 command_runner.go:130] >       "username":  "nonroot",
	I1206 10:45:05.356393  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.356396  399286 command_runner.go:130] >     },
	I1206 10:45:05.356399  399286 command_runner.go:130] >     {
	I1206 10:45:05.356405  399286 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1206 10:45:05.356409  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.356428  399286 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1206 10:45:05.356433  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356438  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.356446  399286 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1206 10:45:05.356453  399286 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1206 10:45:05.356457  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356465  399286 command_runner.go:130] >       "size":  "60857170",
	I1206 10:45:05.356469  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.356472  399286 command_runner.go:130] >         "value":  "0"
	I1206 10:45:05.356475  399286 command_runner.go:130] >       },
	I1206 10:45:05.356488  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.356492  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.356495  399286 command_runner.go:130] >     },
	I1206 10:45:05.356498  399286 command_runner.go:130] >     {
	I1206 10:45:05.356505  399286 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1206 10:45:05.356508  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.356513  399286 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1206 10:45:05.356516  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356520  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.356528  399286 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1206 10:45:05.356536  399286 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1206 10:45:05.356539  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356543  399286 command_runner.go:130] >       "size":  "84949999",
	I1206 10:45:05.356546  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.356550  399286 command_runner.go:130] >         "value":  "0"
	I1206 10:45:05.356553  399286 command_runner.go:130] >       },
	I1206 10:45:05.356557  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.356561  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.356564  399286 command_runner.go:130] >     },
	I1206 10:45:05.356567  399286 command_runner.go:130] >     {
	I1206 10:45:05.356573  399286 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1206 10:45:05.356577  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.356583  399286 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1206 10:45:05.356586  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356590  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.356598  399286 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1206 10:45:05.356606  399286 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1206 10:45:05.356609  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356617  399286 command_runner.go:130] >       "size":  "72170325",
	I1206 10:45:05.356623  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.356627  399286 command_runner.go:130] >         "value":  "0"
	I1206 10:45:05.356631  399286 command_runner.go:130] >       },
	I1206 10:45:05.356634  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.356638  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.356641  399286 command_runner.go:130] >     },
	I1206 10:45:05.356643  399286 command_runner.go:130] >     {
	I1206 10:45:05.356650  399286 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1206 10:45:05.356654  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.356659  399286 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1206 10:45:05.356662  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356666  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.356674  399286 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1206 10:45:05.356681  399286 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1206 10:45:05.356684  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356688  399286 command_runner.go:130] >       "size":  "74106775",
	I1206 10:45:05.356692  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.356695  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.356698  399286 command_runner.go:130] >     },
	I1206 10:45:05.356701  399286 command_runner.go:130] >     {
	I1206 10:45:05.356708  399286 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1206 10:45:05.356711  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.356716  399286 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1206 10:45:05.356719  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356723  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.356730  399286 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1206 10:45:05.356747  399286 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1206 10:45:05.356751  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356755  399286 command_runner.go:130] >       "size":  "49822549",
	I1206 10:45:05.356759  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.356763  399286 command_runner.go:130] >         "value":  "0"
	I1206 10:45:05.356766  399286 command_runner.go:130] >       },
	I1206 10:45:05.356770  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.356778  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.356781  399286 command_runner.go:130] >     },
	I1206 10:45:05.356784  399286 command_runner.go:130] >     {
	I1206 10:45:05.356790  399286 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1206 10:45:05.356794  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.356798  399286 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1206 10:45:05.356801  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356805  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.356812  399286 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1206 10:45:05.356820  399286 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1206 10:45:05.356823  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356826  399286 command_runner.go:130] >       "size":  "519884",
	I1206 10:45:05.356830  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.356833  399286 command_runner.go:130] >         "value":  "65535"
	I1206 10:45:05.356836  399286 command_runner.go:130] >       },
	I1206 10:45:05.356840  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.356843  399286 command_runner.go:130] >       "pinned":  true
	I1206 10:45:05.356850  399286 command_runner.go:130] >     }
	I1206 10:45:05.356853  399286 command_runner.go:130] >   ]
	I1206 10:45:05.356857  399286 command_runner.go:130] > }
	I1206 10:45:05.358491  399286 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:45:05.358523  399286 crio.go:433] Images already preloaded, skipping extraction
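
The preload check above shells out to `sudo crictl images --output json` and compares the returned tags against the image set expected for v1.35.0-beta.0 on crio; since all nine are present, extraction is skipped. The JSON shape is visible in the log; a sketch of decoding it (struct names are mine, field names match the output above):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // imageList mirrors the JSON printed by `crictl images --output json`.
    type imageList struct {
    	Images []struct {
    		ID          string   `json:"id"`
    		RepoTags    []string `json:"repoTags"`
    		RepoDigests []string `json:"repoDigests"`
    		Size        string   `json:"size"`
    		Pinned      bool     `json:"pinned"`
    	} `json:"images"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		panic(err)
    	}
    	var list imageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		panic(err)
    	}
    	have := map[string]bool{}
    	for _, img := range list.Images {
    		for _, tag := range img.RepoTags {
    			have[tag] = true
    		}
    	}
    	// One of the tags the log expects for this Kubernetes version.
    	fmt.Println("kube-apiserver preloaded:", have["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"])
    }
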
	I1206 10:45:05.358585  399286 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:45:05.381820  399286 command_runner.go:130] > {
	I1206 10:45:05.381840  399286 command_runner.go:130] >   "images":  [
	I1206 10:45:05.381844  399286 command_runner.go:130] >     {
	I1206 10:45:05.381853  399286 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1206 10:45:05.381857  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.381864  399286 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1206 10:45:05.381867  399286 command_runner.go:130] >       ],
	I1206 10:45:05.381871  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.381880  399286 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1206 10:45:05.381888  399286 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1206 10:45:05.381892  399286 command_runner.go:130] >       ],
	I1206 10:45:05.381896  399286 command_runner.go:130] >       "size":  "111333938",
	I1206 10:45:05.381900  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.381909  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.381912  399286 command_runner.go:130] >     },
	I1206 10:45:05.381916  399286 command_runner.go:130] >     {
	I1206 10:45:05.381922  399286 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1206 10:45:05.381926  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.381932  399286 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1206 10:45:05.381935  399286 command_runner.go:130] >       ],
	I1206 10:45:05.381939  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.381947  399286 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1206 10:45:05.381956  399286 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1206 10:45:05.381959  399286 command_runner.go:130] >       ],
	I1206 10:45:05.381963  399286 command_runner.go:130] >       "size":  "29037500",
	I1206 10:45:05.381967  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.381973  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.381977  399286 command_runner.go:130] >     },
	I1206 10:45:05.381980  399286 command_runner.go:130] >     {
	I1206 10:45:05.381987  399286 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1206 10:45:05.381990  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.381999  399286 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1206 10:45:05.382003  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382007  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.382014  399286 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1206 10:45:05.382022  399286 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1206 10:45:05.382025  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382029  399286 command_runner.go:130] >       "size":  "74491780",
	I1206 10:45:05.382033  399286 command_runner.go:130] >       "username":  "nonroot",
	I1206 10:45:05.382037  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.382040  399286 command_runner.go:130] >     },
	I1206 10:45:05.382043  399286 command_runner.go:130] >     {
	I1206 10:45:05.382049  399286 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1206 10:45:05.382053  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.382058  399286 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1206 10:45:05.382063  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382067  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.382074  399286 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1206 10:45:05.382082  399286 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1206 10:45:05.382085  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382089  399286 command_runner.go:130] >       "size":  "60857170",
	I1206 10:45:05.382093  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.382096  399286 command_runner.go:130] >         "value":  "0"
	I1206 10:45:05.382100  399286 command_runner.go:130] >       },
	I1206 10:45:05.382398  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.382411  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.382415  399286 command_runner.go:130] >     },
	I1206 10:45:05.382419  399286 command_runner.go:130] >     {
	I1206 10:45:05.382427  399286 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1206 10:45:05.382437  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.382443  399286 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1206 10:45:05.382446  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382450  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.382463  399286 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1206 10:45:05.382476  399286 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1206 10:45:05.382479  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382484  399286 command_runner.go:130] >       "size":  "84949999",
	I1206 10:45:05.382492  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.382495  399286 command_runner.go:130] >         "value":  "0"
	I1206 10:45:05.382499  399286 command_runner.go:130] >       },
	I1206 10:45:05.382503  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.382507  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.382510  399286 command_runner.go:130] >     },
	I1206 10:45:05.382514  399286 command_runner.go:130] >     {
	I1206 10:45:05.382524  399286 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1206 10:45:05.382528  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.382534  399286 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1206 10:45:05.382541  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382546  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.382555  399286 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1206 10:45:05.382568  399286 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1206 10:45:05.382571  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382575  399286 command_runner.go:130] >       "size":  "72170325",
	I1206 10:45:05.382579  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.382583  399286 command_runner.go:130] >         "value":  "0"
	I1206 10:45:05.382590  399286 command_runner.go:130] >       },
	I1206 10:45:05.382594  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.382597  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.382601  399286 command_runner.go:130] >     },
	I1206 10:45:05.382604  399286 command_runner.go:130] >     {
	I1206 10:45:05.382615  399286 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1206 10:45:05.382618  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.382624  399286 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1206 10:45:05.382627  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382631  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.382643  399286 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1206 10:45:05.382651  399286 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1206 10:45:05.382658  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382666  399286 command_runner.go:130] >       "size":  "74106775",
	I1206 10:45:05.382672  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.382676  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.382679  399286 command_runner.go:130] >     },
	I1206 10:45:05.382682  399286 command_runner.go:130] >     {
	I1206 10:45:05.382693  399286 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1206 10:45:05.382697  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.382702  399286 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1206 10:45:05.382706  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382710  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.382722  399286 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1206 10:45:05.382745  399286 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1206 10:45:05.382753  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382757  399286 command_runner.go:130] >       "size":  "49822549",
	I1206 10:45:05.382761  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.382765  399286 command_runner.go:130] >         "value":  "0"
	I1206 10:45:05.382768  399286 command_runner.go:130] >       },
	I1206 10:45:05.382772  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.382780  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.382783  399286 command_runner.go:130] >     },
	I1206 10:45:05.382786  399286 command_runner.go:130] >     {
	I1206 10:45:05.382793  399286 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1206 10:45:05.382797  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.382805  399286 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1206 10:45:05.382808  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382812  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.382820  399286 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1206 10:45:05.382832  399286 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1206 10:45:05.382835  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382839  399286 command_runner.go:130] >       "size":  "519884",
	I1206 10:45:05.382843  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.382847  399286 command_runner.go:130] >         "value":  "65535"
	I1206 10:45:05.382857  399286 command_runner.go:130] >       },
	I1206 10:45:05.382861  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.382865  399286 command_runner.go:130] >       "pinned":  true
	I1206 10:45:05.382868  399286 command_runner.go:130] >     }
	I1206 10:45:05.382871  399286 command_runner.go:130] >   ]
	I1206 10:45:05.382874  399286 command_runner.go:130] > }
	I1206 10:45:05.396183  399286 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:45:05.396208  399286 cache_images.go:86] Images are preloaded, skipping loading
	I1206 10:45:05.396219  399286 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1206 10:45:05.396325  399286 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-196950 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
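
The unit text above uses systemd's override pattern: the empty ExecStart= clears the packaged command, and the second ExecStart= supplies minikube's full kubelet invocation (--cgroups-per-qos=false together with --enforce-node-allocatable= disables QoS cgroup enforcement, consistent with the cgroupfs driver chosen earlier). A sketch of writing such a drop-in; the drop-in directory and file name are assumptions, only the unit body is taken from the log:

    package main

    import (
    	"os"
    	"path/filepath"
    )

    func main() {
    	// Hypothetical drop-in location; the log shows only the unit body.
    	dir := "/etc/systemd/system/kubelet.service.d"
    	unit := `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-196950 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

    [Install]
    `
    	if err := os.MkdirAll(dir, 0755); err != nil {
    		panic(err)
    	}
    	if err := os.WriteFile(filepath.Join(dir, "10-minikube.conf"), []byte(unit), 0644); err != nil {
    		panic(err)
    	}
    	// A systemctl daemon-reload is still required to pick this up.
    }
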
	I1206 10:45:05.396421  399286 ssh_runner.go:195] Run: crio config
	I1206 10:45:05.425462  399286 command_runner.go:130] ! time="2025-12-06T10:45:05.425119459Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1206 10:45:05.425532  399286 command_runner.go:130] ! time="2025-12-06T10:45:05.425157991Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1206 10:45:05.425754  399286 command_runner.go:130] ! time="2025-12-06T10:45:05.425195308Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1206 10:45:05.425797  399286 command_runner.go:130] ! time="2025-12-06T10:45:05.42522017Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1206 10:45:05.425982  399286 command_runner.go:130] ! time="2025-12-06T10:45:05.425299687Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:45:05.426160  399286 command_runner.go:130] ! time="2025-12-06T10:45:05.42561672Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1206 10:45:05.442529  399286 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1206 10:45:05.470811  399286 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1206 10:45:05.470887  399286 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1206 10:45:05.470910  399286 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1206 10:45:05.470925  399286 command_runner.go:130] > #
	I1206 10:45:05.470961  399286 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1206 10:45:05.470990  399286 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1206 10:45:05.471012  399286 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1206 10:45:05.471037  399286 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1206 10:45:05.471066  399286 command_runner.go:130] > # reload'.
	I1206 10:45:05.471089  399286 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1206 10:45:05.471110  399286 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1206 10:45:05.471132  399286 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1206 10:45:05.471165  399286 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1206 10:45:05.471189  399286 command_runner.go:130] > [crio]
	I1206 10:45:05.471211  399286 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1206 10:45:05.471233  399286 command_runner.go:130] > # containers images, in this directory.
	I1206 10:45:05.471266  399286 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1206 10:45:05.471291  399286 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1206 10:45:05.471336  399286 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1206 10:45:05.471369  399286 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1206 10:45:05.471416  399286 command_runner.go:130] > # imagestore = ""
	I1206 10:45:05.471447  399286 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1206 10:45:05.471467  399286 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1206 10:45:05.471498  399286 command_runner.go:130] > # storage_driver = "overlay"
	I1206 10:45:05.471527  399286 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1206 10:45:05.471540  399286 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1206 10:45:05.471544  399286 command_runner.go:130] > # storage_option = [
	I1206 10:45:05.471548  399286 command_runner.go:130] > # ]
	I1206 10:45:05.471554  399286 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1206 10:45:05.471561  399286 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1206 10:45:05.471566  399286 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1206 10:45:05.471572  399286 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1206 10:45:05.471584  399286 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1206 10:45:05.471601  399286 command_runner.go:130] > # always happen on a node reboot
	I1206 10:45:05.471614  399286 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1206 10:45:05.471624  399286 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1206 10:45:05.471631  399286 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1206 10:45:05.471647  399286 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1206 10:45:05.471665  399286 command_runner.go:130] > # version_file_persist = ""
	I1206 10:45:05.471674  399286 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1206 10:45:05.471685  399286 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1206 10:45:05.471689  399286 command_runner.go:130] > # internal_wipe = true
	I1206 10:45:05.471701  399286 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1206 10:45:05.471736  399286 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1206 10:45:05.471747  399286 command_runner.go:130] > # internal_repair = true
	I1206 10:45:05.471753  399286 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1206 10:45:05.471760  399286 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1206 10:45:05.471768  399286 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1206 10:45:05.471774  399286 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1206 10:45:05.471790  399286 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1206 10:45:05.471793  399286 command_runner.go:130] > [crio.api]
	I1206 10:45:05.471799  399286 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1206 10:45:05.471810  399286 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1206 10:45:05.471817  399286 command_runner.go:130] > # IP address on which the stream server will listen.
	I1206 10:45:05.471822  399286 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1206 10:45:05.471829  399286 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1206 10:45:05.471837  399286 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1206 10:45:05.471841  399286 command_runner.go:130] > # stream_port = "0"
	I1206 10:45:05.471852  399286 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1206 10:45:05.471856  399286 command_runner.go:130] > # stream_enable_tls = false
	I1206 10:45:05.471867  399286 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1206 10:45:05.471871  399286 command_runner.go:130] > # stream_idle_timeout = ""
	I1206 10:45:05.471891  399286 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1206 10:45:05.471897  399286 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1206 10:45:05.471905  399286 command_runner.go:130] > # stream_tls_cert = ""
	I1206 10:45:05.471912  399286 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1206 10:45:05.471918  399286 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1206 10:45:05.471922  399286 command_runner.go:130] > # stream_tls_key = ""
	I1206 10:45:05.471928  399286 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1206 10:45:05.471937  399286 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1206 10:45:05.471942  399286 command_runner.go:130] > # automatically pick up the changes.
	I1206 10:45:05.471950  399286 command_runner.go:130] > # stream_tls_ca = ""
	I1206 10:45:05.471981  399286 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1206 10:45:05.471991  399286 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1206 10:45:05.471999  399286 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1206 10:45:05.472004  399286 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1206 10:45:05.472010  399286 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1206 10:45:05.472018  399286 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1206 10:45:05.472022  399286 command_runner.go:130] > [crio.runtime]
	I1206 10:45:05.472029  399286 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1206 10:45:05.472036  399286 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1206 10:45:05.472041  399286 command_runner.go:130] > # "nofile=1024:2048"
	I1206 10:45:05.472057  399286 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1206 10:45:05.472061  399286 command_runner.go:130] > # default_ulimits = [
	I1206 10:45:05.472064  399286 command_runner.go:130] > # ]
	I1206 10:45:05.472070  399286 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1206 10:45:05.472077  399286 command_runner.go:130] > # no_pivot = false
	I1206 10:45:05.472083  399286 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1206 10:45:05.472090  399286 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1206 10:45:05.472095  399286 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1206 10:45:05.472103  399286 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1206 10:45:05.472108  399286 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1206 10:45:05.472117  399286 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1206 10:45:05.472123  399286 command_runner.go:130] > # conmon = ""
	I1206 10:45:05.472127  399286 command_runner.go:130] > # Cgroup setting for conmon
	I1206 10:45:05.472137  399286 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1206 10:45:05.472143  399286 command_runner.go:130] > conmon_cgroup = "pod"
	I1206 10:45:05.472152  399286 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1206 10:45:05.472157  399286 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1206 10:45:05.472164  399286 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1206 10:45:05.472168  399286 command_runner.go:130] > # conmon_env = [
	I1206 10:45:05.472173  399286 command_runner.go:130] > # ]
	I1206 10:45:05.472180  399286 command_runner.go:130] > # Additional environment variables to set for all the
	I1206 10:45:05.472188  399286 command_runner.go:130] > # containers. These are overridden if set in the
	I1206 10:45:05.472198  399286 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1206 10:45:05.472204  399286 command_runner.go:130] > # default_env = [
	I1206 10:45:05.472208  399286 command_runner.go:130] > # ]
	I1206 10:45:05.472213  399286 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1206 10:45:05.472223  399286 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1206 10:45:05.472229  399286 command_runner.go:130] > # selinux = false
	I1206 10:45:05.472236  399286 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1206 10:45:05.472246  399286 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1206 10:45:05.472252  399286 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:45:05.472255  399286 command_runner.go:130] > # seccomp_profile = ""
	I1206 10:45:05.472262  399286 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1206 10:45:05.472270  399286 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:45:05.472274  399286 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1206 10:45:05.472281  399286 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1206 10:45:05.472287  399286 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1206 10:45:05.472295  399286 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1206 10:45:05.472302  399286 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1206 10:45:05.472315  399286 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:45:05.472320  399286 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1206 10:45:05.472326  399286 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1206 10:45:05.472330  399286 command_runner.go:130] > # the cgroup blockio controller.
	I1206 10:45:05.472337  399286 command_runner.go:130] > # blockio_config_file = ""
	I1206 10:45:05.472345  399286 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1206 10:45:05.472353  399286 command_runner.go:130] > # blockio parameters.
	I1206 10:45:05.472357  399286 command_runner.go:130] > # blockio_reload = false
	I1206 10:45:05.472364  399286 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1206 10:45:05.472367  399286 command_runner.go:130] > # irqbalance daemon.
	I1206 10:45:05.472373  399286 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1206 10:45:05.472381  399286 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1206 10:45:05.472391  399286 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1206 10:45:05.472412  399286 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1206 10:45:05.472419  399286 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1206 10:45:05.472428  399286 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1206 10:45:05.472437  399286 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:45:05.472448  399286 command_runner.go:130] > # rdt_config_file = ""
	I1206 10:45:05.472455  399286 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1206 10:45:05.472459  399286 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1206 10:45:05.472465  399286 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1206 10:45:05.472472  399286 command_runner.go:130] > # separate_pull_cgroup = ""
	I1206 10:45:05.472479  399286 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1206 10:45:05.472486  399286 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1206 10:45:05.472498  399286 command_runner.go:130] > # will be added.
	I1206 10:45:05.472503  399286 command_runner.go:130] > # default_capabilities = [
	I1206 10:45:05.472506  399286 command_runner.go:130] > # 	"CHOWN",
	I1206 10:45:05.472510  399286 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1206 10:45:05.472520  399286 command_runner.go:130] > # 	"FSETID",
	I1206 10:45:05.472525  399286 command_runner.go:130] > # 	"FOWNER",
	I1206 10:45:05.472529  399286 command_runner.go:130] > # 	"SETGID",
	I1206 10:45:05.472539  399286 command_runner.go:130] > # 	"SETUID",
	I1206 10:45:05.472558  399286 command_runner.go:130] > # 	"SETPCAP",
	I1206 10:45:05.472573  399286 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1206 10:45:05.472576  399286 command_runner.go:130] > # 	"KILL",
	I1206 10:45:05.472579  399286 command_runner.go:130] > # ]
	I1206 10:45:05.472587  399286 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1206 10:45:05.472602  399286 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1206 10:45:05.472607  399286 command_runner.go:130] > # add_inheritable_capabilities = false
	I1206 10:45:05.472616  399286 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1206 10:45:05.472628  399286 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1206 10:45:05.472632  399286 command_runner.go:130] > default_sysctls = [
	I1206 10:45:05.472637  399286 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1206 10:45:05.472643  399286 command_runner.go:130] > ]
	I1206 10:45:05.472650  399286 command_runner.go:130] > # List of devices on the host that a
	I1206 10:45:05.472660  399286 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1206 10:45:05.472664  399286 command_runner.go:130] > # allowed_devices = [
	I1206 10:45:05.472670  399286 command_runner.go:130] > # 	"/dev/fuse",
	I1206 10:45:05.472674  399286 command_runner.go:130] > # 	"/dev/net/tun",
	I1206 10:45:05.472681  399286 command_runner.go:130] > # ]
	I1206 10:45:05.472689  399286 command_runner.go:130] > # List of additional devices. specified as
	I1206 10:45:05.472697  399286 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1206 10:45:05.472703  399286 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1206 10:45:05.472711  399286 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1206 10:45:05.472716  399286 command_runner.go:130] > # additional_devices = [
	I1206 10:45:05.472722  399286 command_runner.go:130] > # ]
	I1206 10:45:05.472730  399286 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1206 10:45:05.472737  399286 command_runner.go:130] > # cdi_spec_dirs = [
	I1206 10:45:05.472743  399286 command_runner.go:130] > # 	"/etc/cdi",
	I1206 10:45:05.472747  399286 command_runner.go:130] > # 	"/var/run/cdi",
	I1206 10:45:05.472750  399286 command_runner.go:130] > # ]
	I1206 10:45:05.472757  399286 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1206 10:45:05.472766  399286 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1206 10:45:05.472770  399286 command_runner.go:130] > # Defaults to false.
	I1206 10:45:05.472775  399286 command_runner.go:130] > # device_ownership_from_security_context = false
	I1206 10:45:05.472782  399286 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1206 10:45:05.472791  399286 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1206 10:45:05.472795  399286 command_runner.go:130] > # hooks_dir = [
	I1206 10:45:05.472800  399286 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1206 10:45:05.472806  399286 command_runner.go:130] > # ]
	I1206 10:45:05.472813  399286 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1206 10:45:05.472819  399286 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1206 10:45:05.472827  399286 command_runner.go:130] > # its default mounts from the following two files:
	I1206 10:45:05.472830  399286 command_runner.go:130] > #
	I1206 10:45:05.472836  399286 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1206 10:45:05.472845  399286 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1206 10:45:05.472852  399286 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1206 10:45:05.472858  399286 command_runner.go:130] > #
	I1206 10:45:05.472865  399286 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1206 10:45:05.472871  399286 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1206 10:45:05.472878  399286 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1206 10:45:05.472887  399286 command_runner.go:130] > #      only add mounts it finds in this file.
	I1206 10:45:05.472896  399286 command_runner.go:130] > #
	I1206 10:45:05.472902  399286 command_runner.go:130] > # default_mounts_file = ""
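The mounts-file format documented above is one /SRC:/DST pair per line; a minimal sketch of an override file (the source path is hypothetical):

	# /etc/containers/mounts.conf -- one /SRC:/DST mount per line
	/var/lib/extra-secrets:/run/secrets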
	I1206 10:45:05.472910  399286 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1206 10:45:05.472919  399286 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1206 10:45:05.472932  399286 command_runner.go:130] > # pids_limit = -1
	I1206 10:45:05.472938  399286 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1206 10:45:05.472947  399286 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1206 10:45:05.472961  399286 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1206 10:45:05.472979  399286 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1206 10:45:05.472983  399286 command_runner.go:130] > # log_size_max = -1
	I1206 10:45:05.472990  399286 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1206 10:45:05.472997  399286 command_runner.go:130] > # log_to_journald = false
	I1206 10:45:05.473006  399286 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1206 10:45:05.473011  399286 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1206 10:45:05.473016  399286 command_runner.go:130] > # Path to directory for container attach sockets.
	I1206 10:45:05.473024  399286 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1206 10:45:05.473032  399286 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1206 10:45:05.473036  399286 command_runner.go:130] > # bind_mount_prefix = ""
	I1206 10:45:05.473044  399286 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1206 10:45:05.473049  399286 command_runner.go:130] > # read_only = false
	I1206 10:45:05.473063  399286 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1206 10:45:05.473070  399286 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1206 10:45:05.473074  399286 command_runner.go:130] > # live configuration reload.
	I1206 10:45:05.473085  399286 command_runner.go:130] > # log_level = "info"
	I1206 10:45:05.473092  399286 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1206 10:45:05.473097  399286 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:45:05.473101  399286 command_runner.go:130] > # log_filter = ""
	I1206 10:45:05.473110  399286 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1206 10:45:05.473119  399286 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1206 10:45:05.473123  399286 command_runner.go:130] > # separated by comma.
	I1206 10:45:05.473132  399286 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1206 10:45:05.473138  399286 command_runner.go:130] > # uid_mappings = ""
	I1206 10:45:05.473145  399286 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1206 10:45:05.473155  399286 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1206 10:45:05.473162  399286 command_runner.go:130] > # separated by comma.
	I1206 10:45:05.473171  399286 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1206 10:45:05.473178  399286 command_runner.go:130] > # gid_mappings = ""
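Given the containerUID:HostUID:Size form documented above, a hedged illustration of these (deprecated) mapping options with placeholder values:

	# map container IDs 0-65535 onto host IDs starting at 100000 (values illustrative only)
	# uid_mappings = "0:100000:65536"
	# gid_mappings = "0:100000:65536"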
	I1206 10:45:05.473185  399286 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1206 10:45:05.473197  399286 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1206 10:45:05.473206  399286 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1206 10:45:05.473217  399286 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1206 10:45:05.473223  399286 command_runner.go:130] > # minimum_mappable_uid = -1
	I1206 10:45:05.473230  399286 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1206 10:45:05.473238  399286 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1206 10:45:05.473249  399286 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1206 10:45:05.473260  399286 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1206 10:45:05.473264  399286 command_runner.go:130] > # minimum_mappable_gid = -1
	I1206 10:45:05.473270  399286 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1206 10:45:05.473282  399286 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1206 10:45:05.473287  399286 command_runner.go:130] > # value is 30s; lower values are not considered by CRI-O.
	I1206 10:45:05.473292  399286 command_runner.go:130] > # ctr_stop_timeout = 30
	I1206 10:45:05.473298  399286 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1206 10:45:05.473307  399286 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1206 10:45:05.473312  399286 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1206 10:45:05.473317  399286 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1206 10:45:05.473323  399286 command_runner.go:130] > # drop_infra_ctr = true
	I1206 10:45:05.473330  399286 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1206 10:45:05.473339  399286 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1206 10:45:05.473347  399286 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1206 10:45:05.473351  399286 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1206 10:45:05.473362  399286 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1206 10:45:05.473373  399286 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1206 10:45:05.473378  399286 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1206 10:45:05.473383  399286 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1206 10:45:05.473389  399286 command_runner.go:130] > # shared_cpuset = ""
	I1206 10:45:05.473397  399286 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1206 10:45:05.473408  399286 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1206 10:45:05.473415  399286 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1206 10:45:05.473423  399286 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1206 10:45:05.473429  399286 command_runner.go:130] > # pinns_path = ""
	I1206 10:45:05.473435  399286 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1206 10:45:05.473442  399286 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1206 10:45:05.473446  399286 command_runner.go:130] > # enable_criu_support = true
	I1206 10:45:05.473458  399286 command_runner.go:130] > # Enable/disable the generation of container and
	I1206 10:45:05.473465  399286 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1206 10:45:05.473469  399286 command_runner.go:130] > # enable_pod_events = false
	I1206 10:45:05.473476  399286 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1206 10:45:05.473483  399286 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1206 10:45:05.473487  399286 command_runner.go:130] > # default_runtime = "crun"
	I1206 10:45:05.473492  399286 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1206 10:45:05.473502  399286 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating it as a directory).
	I1206 10:45:05.473513  399286 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1206 10:45:05.473521  399286 command_runner.go:130] > # creation as a file is not desired either.
	I1206 10:45:05.473531  399286 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1206 10:45:05.473540  399286 command_runner.go:130] > # the hostname is being managed dynamically.
	I1206 10:45:05.473551  399286 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1206 10:45:05.473554  399286 command_runner.go:130] > # ]
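Following the /etc/hostname example in the comments above, a populated form of this option would look like:

	absent_mount_sources_to_reject = [
		"/etc/hostname",
	]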
	I1206 10:45:05.473561  399286 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1206 10:45:05.473567  399286 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1206 10:45:05.473576  399286 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1206 10:45:05.473582  399286 command_runner.go:130] > # Each entry in the table should follow the format:
	I1206 10:45:05.473596  399286 command_runner.go:130] > #
	I1206 10:45:05.473602  399286 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1206 10:45:05.473606  399286 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1206 10:45:05.473610  399286 command_runner.go:130] > # runtime_type = "oci"
	I1206 10:45:05.473616  399286 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1206 10:45:05.473623  399286 command_runner.go:130] > # inherit_default_runtime = false
	I1206 10:45:05.473628  399286 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1206 10:45:05.473632  399286 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1206 10:45:05.473646  399286 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1206 10:45:05.473650  399286 command_runner.go:130] > # monitor_env = []
	I1206 10:45:05.473654  399286 command_runner.go:130] > # privileged_without_host_devices = false
	I1206 10:45:05.473659  399286 command_runner.go:130] > # allowed_annotations = []
	I1206 10:45:05.473667  399286 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1206 10:45:05.473673  399286 command_runner.go:130] > # no_sync_log = false
	I1206 10:45:05.473677  399286 command_runner.go:130] > # default_annotations = {}
	I1206 10:45:05.473682  399286 command_runner.go:130] > # stream_websockets = false
	I1206 10:45:05.473689  399286 command_runner.go:130] > # seccomp_profile = ""
	I1206 10:45:05.473708  399286 command_runner.go:130] > # Where:
	I1206 10:45:05.473717  399286 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1206 10:45:05.473724  399286 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1206 10:45:05.473730  399286 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1206 10:45:05.473739  399286 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1206 10:45:05.473743  399286 command_runner.go:130] > #   in $PATH.
	I1206 10:45:05.473749  399286 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1206 10:45:05.473754  399286 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1206 10:45:05.473763  399286 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of container
	I1206 10:45:05.473768  399286 command_runner.go:130] > #   state.
	I1206 10:45:05.473775  399286 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1206 10:45:05.473789  399286 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1206 10:45:05.473796  399286 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1206 10:45:05.473802  399286 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1206 10:45:05.473810  399286 command_runner.go:130] > #   the values from the default runtime on load time.
	I1206 10:45:05.473816  399286 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1206 10:45:05.473824  399286 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1206 10:45:05.473834  399286 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1206 10:45:05.473841  399286 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1206 10:45:05.473846  399286 command_runner.go:130] > #   The currently recognized values are:
	I1206 10:45:05.473852  399286 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1206 10:45:05.473862  399286 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1206 10:45:05.473868  399286 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1206 10:45:05.473876  399286 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1206 10:45:05.473890  399286 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1206 10:45:05.473900  399286 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1206 10:45:05.473907  399286 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1206 10:45:05.473914  399286 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1206 10:45:05.473924  399286 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1206 10:45:05.473930  399286 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1206 10:45:05.473938  399286 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1206 10:45:05.473946  399286 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1206 10:45:05.473955  399286 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1206 10:45:05.473961  399286 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1206 10:45:05.473970  399286 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1206 10:45:05.473978  399286 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1206 10:45:05.473988  399286 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1206 10:45:05.473992  399286 command_runner.go:130] > #   deprecated option "conmon".
	I1206 10:45:05.474000  399286 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1206 10:45:05.474008  399286 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1206 10:45:05.474015  399286 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1206 10:45:05.474020  399286 command_runner.go:130] > #   should be moved to the container's cgroup
	I1206 10:45:05.474027  399286 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1206 10:45:05.474034  399286 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1206 10:45:05.474042  399286 command_runner.go:130] > #   When using the pod runtime and conmon-rs, the monitor_env can be used to further configure
	I1206 10:45:05.474048  399286 command_runner.go:130] > #   conmon-rs by using:
	I1206 10:45:05.474057  399286 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1206 10:45:05.474070  399286 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1206 10:45:05.474077  399286 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1206 10:45:05.474091  399286 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1206 10:45:05.474096  399286 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1206 10:45:05.474106  399286 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1206 10:45:05.474114  399286 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1206 10:45:05.474122  399286 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1206 10:45:05.474130  399286 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1206 10:45:05.474143  399286 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1206 10:45:05.474148  399286 command_runner.go:130] > #   when a machine crash happens.
	I1206 10:45:05.474159  399286 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1206 10:45:05.474172  399286 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1206 10:45:05.474181  399286 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1206 10:45:05.474188  399286 command_runner.go:130] > #   seccomp profile for the runtime.
	I1206 10:45:05.474212  399286 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1206 10:45:05.474223  399286 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1206 10:45:05.474227  399286 command_runner.go:130] > #
	I1206 10:45:05.474233  399286 command_runner.go:130] > # Using the seccomp notifier feature:
	I1206 10:45:05.474236  399286 command_runner.go:130] > #
	I1206 10:45:05.474244  399286 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1206 10:45:05.474254  399286 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1206 10:45:05.474257  399286 command_runner.go:130] > #
	I1206 10:45:05.474264  399286 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1206 10:45:05.474273  399286 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1206 10:45:05.474276  399286 command_runner.go:130] > #
	I1206 10:45:05.474283  399286 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1206 10:45:05.474286  399286 command_runner.go:130] > # feature.
	I1206 10:45:05.474289  399286 command_runner.go:130] > #
	I1206 10:45:05.474299  399286 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1206 10:45:05.474307  399286 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1206 10:45:05.474314  399286 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1206 10:45:05.474322  399286 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1206 10:45:05.474329  399286 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1206 10:45:05.474336  399286 command_runner.go:130] > #
	I1206 10:45:05.474344  399286 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1206 10:45:05.474350  399286 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1206 10:45:05.474354  399286 command_runner.go:130] > #
	I1206 10:45:05.474361  399286 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I1206 10:45:05.474371  399286 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1206 10:45:05.474374  399286 command_runner.go:130] > #
	I1206 10:45:05.474380  399286 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1206 10:45:05.474386  399286 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1206 10:45:05.474392  399286 command_runner.go:130] > # limitation.
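Per the notes above, the notifier only takes effect for a runtime handler that allows the annotation; a minimal sketch of such an entry (handler name reused from the runc handler below, all other fields omitted):

	[crio.runtime.runtimes.runc]
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]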
	I1206 10:45:05.474401  399286 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1206 10:45:05.474409  399286 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1206 10:45:05.474413  399286 command_runner.go:130] > runtime_type = ""
	I1206 10:45:05.474417  399286 command_runner.go:130] > runtime_root = "/run/crun"
	I1206 10:45:05.474422  399286 command_runner.go:130] > inherit_default_runtime = false
	I1206 10:45:05.474426  399286 command_runner.go:130] > runtime_config_path = ""
	I1206 10:45:05.474432  399286 command_runner.go:130] > container_min_memory = ""
	I1206 10:45:05.474437  399286 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1206 10:45:05.474442  399286 command_runner.go:130] > monitor_cgroup = "pod"
	I1206 10:45:05.474448  399286 command_runner.go:130] > monitor_exec_cgroup = ""
	I1206 10:45:05.474453  399286 command_runner.go:130] > allowed_annotations = [
	I1206 10:45:05.474461  399286 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1206 10:45:05.474464  399286 command_runner.go:130] > ]
	I1206 10:45:05.474469  399286 command_runner.go:130] > privileged_without_host_devices = false
	I1206 10:45:05.474473  399286 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1206 10:45:05.474478  399286 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1206 10:45:05.474484  399286 command_runner.go:130] > runtime_type = ""
	I1206 10:45:05.474489  399286 command_runner.go:130] > runtime_root = "/run/runc"
	I1206 10:45:05.474496  399286 command_runner.go:130] > inherit_default_runtime = false
	I1206 10:45:05.474501  399286 command_runner.go:130] > runtime_config_path = ""
	I1206 10:45:05.474506  399286 command_runner.go:130] > container_min_memory = ""
	I1206 10:45:05.474513  399286 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1206 10:45:05.474518  399286 command_runner.go:130] > monitor_cgroup = "pod"
	I1206 10:45:05.474522  399286 command_runner.go:130] > monitor_exec_cgroup = ""
	I1206 10:45:05.474530  399286 command_runner.go:130] > privileged_without_host_devices = false
	I1206 10:45:05.474540  399286 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1206 10:45:05.474548  399286 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1206 10:45:05.474556  399286 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1206 10:45:05.474564  399286 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1206 10:45:05.474575  399286 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1206 10:45:05.474592  399286 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1206 10:45:05.474602  399286 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1206 10:45:05.474610  399286 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1206 10:45:05.474622  399286 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1206 10:45:05.474635  399286 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1206 10:45:05.474642  399286 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1206 10:45:05.474652  399286 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1206 10:45:05.474656  399286 command_runner.go:130] > # Example:
	I1206 10:45:05.474664  399286 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1206 10:45:05.474672  399286 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1206 10:45:05.474677  399286 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1206 10:45:05.474686  399286 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1206 10:45:05.474691  399286 command_runner.go:130] > # cpuset = "0-1"
	I1206 10:45:05.474703  399286 command_runner.go:130] > # cpushares = "5"
	I1206 10:45:05.474708  399286 command_runner.go:130] > # cpuquota = "1000"
	I1206 10:45:05.474712  399286 command_runner.go:130] > # cpuperiod = "100000"
	I1206 10:45:05.474716  399286 command_runner.go:130] > # cpulimit = "35"
	I1206 10:45:05.474720  399286 command_runner.go:130] > # Where:
	I1206 10:45:05.474724  399286 command_runner.go:130] > # The workload name is workload-type.
	I1206 10:45:05.474738  399286 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1206 10:45:05.474744  399286 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1206 10:45:05.474749  399286 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1206 10:45:05.474761  399286 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1206 10:45:05.474777  399286 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1206 10:45:05.474783  399286 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1206 10:45:05.474790  399286 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1206 10:45:05.474797  399286 command_runner.go:130] > # Default value is set to true
	I1206 10:45:05.474803  399286 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1206 10:45:05.474809  399286 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1206 10:45:05.474821  399286 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1206 10:45:05.474826  399286 command_runner.go:130] > # Default value is set to 'false'
	I1206 10:45:05.474830  399286 command_runner.go:130] > # disable_hostport_mapping = false
	I1206 10:45:05.474836  399286 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1206 10:45:05.474847  399286 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1206 10:45:05.474853  399286 command_runner.go:130] > # timezone = ""
	I1206 10:45:05.474860  399286 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1206 10:45:05.474866  399286 command_runner.go:130] > #
	I1206 10:45:05.474874  399286 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1206 10:45:05.474883  399286 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1206 10:45:05.474889  399286 command_runner.go:130] > [crio.image]
	I1206 10:45:05.474895  399286 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1206 10:45:05.474899  399286 command_runner.go:130] > # default_transport = "docker://"
	I1206 10:45:05.474913  399286 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1206 10:45:05.474920  399286 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1206 10:45:05.474924  399286 command_runner.go:130] > # global_auth_file = ""
	I1206 10:45:05.474929  399286 command_runner.go:130] > # The image used to instantiate infra containers.
	I1206 10:45:05.474938  399286 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:45:05.474943  399286 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1206 10:45:05.474952  399286 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1206 10:45:05.474959  399286 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1206 10:45:05.474967  399286 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:45:05.474972  399286 command_runner.go:130] > # pause_image_auth_file = ""
	I1206 10:45:05.474977  399286 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1206 10:45:05.474984  399286 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1206 10:45:05.474994  399286 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1206 10:45:05.475000  399286 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1206 10:45:05.475009  399286 command_runner.go:130] > # pause_command = "/pause"
	I1206 10:45:05.475015  399286 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1206 10:45:05.475021  399286 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1206 10:45:05.475030  399286 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1206 10:45:05.475036  399286 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1206 10:45:05.475044  399286 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1206 10:45:05.475051  399286 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1206 10:45:05.475058  399286 command_runner.go:130] > # pinned_images = [
	I1206 10:45:05.475061  399286 command_runner.go:130] > # ]
	I1206 10:45:05.475067  399286 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1206 10:45:05.475074  399286 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1206 10:45:05.475083  399286 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1206 10:45:05.475090  399286 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1206 10:45:05.475098  399286 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1206 10:45:05.475104  399286 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1206 10:45:05.475110  399286 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1206 10:45:05.475120  399286 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1206 10:45:05.475129  399286 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1206 10:45:05.475138  399286 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I1206 10:45:05.475145  399286 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1206 10:45:05.475150  399286 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
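For example, with the default directory above, a pull in a (hypothetical) "kube-system" pod namespace would be checked against /etc/crio/policies/kube-system.json before falling back to signature_policy or the system-wide policy.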
	I1206 10:45:05.475156  399286 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1206 10:45:05.475165  399286 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1206 10:45:05.475169  399286 command_runner.go:130] > # changing them here.
	I1206 10:45:05.475176  399286 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1206 10:45:05.475183  399286 command_runner.go:130] > # insecure_registries = [
	I1206 10:45:05.475186  399286 command_runner.go:130] > # ]
	I1206 10:45:05.475193  399286 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1206 10:45:05.475201  399286 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1206 10:45:05.475208  399286 command_runner.go:130] > # image_volumes = "mkdir"
	I1206 10:45:05.475214  399286 command_runner.go:130] > # Temporary directory to use for storing big files
	I1206 10:45:05.475220  399286 command_runner.go:130] > # big_files_temporary_dir = ""
	I1206 10:45:05.475226  399286 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1206 10:45:05.475236  399286 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1206 10:45:05.475241  399286 command_runner.go:130] > # auto_reload_registries = false
	I1206 10:45:05.475247  399286 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1206 10:45:05.475257  399286 command_runner.go:130] > # gets canceled. This value will also be used to calculate the pull progress interval as pull_progress_timeout / 10.
	I1206 10:45:05.475267  399286 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1206 10:45:05.475271  399286 command_runner.go:130] > # pull_progress_timeout = "0s"
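For instance, a pull_progress_timeout of "10s" would give a pull progress interval of 1s (pull_progress_timeout / 10).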
	I1206 10:45:05.475277  399286 command_runner.go:130] > # The mode of short name resolution.
	I1206 10:45:05.475284  399286 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1206 10:45:05.475293  399286 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1206 10:45:05.475298  399286 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1206 10:45:05.475314  399286 command_runner.go:130] > # short_name_mode = "enforcing"
	I1206 10:45:05.475321  399286 command_runner.go:130] > # OCIArtifactMountSupport controls whether CRI-O should support OCI artifacts.
	I1206 10:45:05.475327  399286 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1206 10:45:05.475335  399286 command_runner.go:130] > # oci_artifact_mount_support = true
	I1206 10:45:05.475343  399286 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1206 10:45:05.475349  399286 command_runner.go:130] > # CNI plugins.
	I1206 10:45:05.475353  399286 command_runner.go:130] > [crio.network]
	I1206 10:45:05.475360  399286 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1206 10:45:05.475368  399286 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1206 10:45:05.475386  399286 command_runner.go:130] > # cni_default_network = ""
	I1206 10:45:05.475398  399286 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1206 10:45:05.475407  399286 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1206 10:45:05.475413  399286 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1206 10:45:05.475419  399286 command_runner.go:130] > # plugin_dirs = [
	I1206 10:45:05.475424  399286 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1206 10:45:05.475429  399286 command_runner.go:130] > # ]
	I1206 10:45:05.475434  399286 command_runner.go:130] > # List of included pod metrics.
	I1206 10:45:05.475441  399286 command_runner.go:130] > # included_pod_metrics = [
	I1206 10:45:05.475445  399286 command_runner.go:130] > # ]
	I1206 10:45:05.475451  399286 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1206 10:45:05.475457  399286 command_runner.go:130] > [crio.metrics]
	I1206 10:45:05.475463  399286 command_runner.go:130] > # Globally enable or disable metrics support.
	I1206 10:45:05.475467  399286 command_runner.go:130] > # enable_metrics = false
	I1206 10:45:05.475472  399286 command_runner.go:130] > # Specify enabled metrics collectors.
	I1206 10:45:05.475476  399286 command_runner.go:130] > # Per default all metrics are enabled.
	I1206 10:45:05.475483  399286 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1206 10:45:05.475490  399286 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1206 10:45:05.475497  399286 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1206 10:45:05.475501  399286 command_runner.go:130] > # metrics_collectors = [
	I1206 10:45:05.475505  399286 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1206 10:45:05.475510  399286 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1206 10:45:05.475518  399286 command_runner.go:130] > # 	"containers_oom_total",
	I1206 10:45:05.475522  399286 command_runner.go:130] > # 	"processes_defunct",
	I1206 10:45:05.475528  399286 command_runner.go:130] > # 	"operations_total",
	I1206 10:45:05.475533  399286 command_runner.go:130] > # 	"operations_latency_seconds",
	I1206 10:45:05.475540  399286 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1206 10:45:05.475547  399286 command_runner.go:130] > # 	"operations_errors_total",
	I1206 10:45:05.475554  399286 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1206 10:45:05.475559  399286 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1206 10:45:05.475564  399286 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1206 10:45:05.475571  399286 command_runner.go:130] > # 	"image_pulls_success_total",
	I1206 10:45:05.475576  399286 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1206 10:45:05.475583  399286 command_runner.go:130] > # 	"containers_oom_count_total",
	I1206 10:45:05.475590  399286 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1206 10:45:05.475602  399286 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1206 10:45:05.475607  399286 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1206 10:45:05.475610  399286 command_runner.go:130] > # ]
	I1206 10:45:05.475616  399286 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1206 10:45:05.475620  399286 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1206 10:45:05.475626  399286 command_runner.go:130] > # The port on which the metrics server will listen.
	I1206 10:45:05.475639  399286 command_runner.go:130] > # metrics_port = 9090
	I1206 10:45:05.475646  399286 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1206 10:45:05.475649  399286 command_runner.go:130] > # metrics_socket = ""
	I1206 10:45:05.475657  399286 command_runner.go:130] > # The certificate for the secure metrics server.
	I1206 10:45:05.475670  399286 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1206 10:45:05.475677  399286 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1206 10:45:05.475691  399286 command_runner.go:130] > # certificate on any modification event.
	I1206 10:45:05.475695  399286 command_runner.go:130] > # metrics_cert = ""
	I1206 10:45:05.475703  399286 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1206 10:45:05.475708  399286 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1206 10:45:05.475712  399286 command_runner.go:130] > # metrics_key = ""
	I1206 10:45:05.475720  399286 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1206 10:45:05.475727  399286 command_runner.go:130] > [crio.tracing]
	I1206 10:45:05.475732  399286 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1206 10:45:05.475737  399286 command_runner.go:130] > # enable_tracing = false
	I1206 10:45:05.475748  399286 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1206 10:45:05.475753  399286 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1206 10:45:05.475767  399286 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1206 10:45:05.475772  399286 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
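For instance, a tracing_sampling_rate_per_million of 500000 would sample roughly half of all spans.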
	I1206 10:45:05.475781  399286 command_runner.go:130] > # CRI-O NRI configuration.
	I1206 10:45:05.475784  399286 command_runner.go:130] > [crio.nri]
	I1206 10:45:05.475789  399286 command_runner.go:130] > # Globally enable or disable NRI.
	I1206 10:45:05.475792  399286 command_runner.go:130] > # enable_nri = true
	I1206 10:45:05.475799  399286 command_runner.go:130] > # NRI socket to listen on.
	I1206 10:45:05.475804  399286 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1206 10:45:05.475811  399286 command_runner.go:130] > # NRI plugin directory to use.
	I1206 10:45:05.475817  399286 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1206 10:45:05.475825  399286 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1206 10:45:05.475830  399286 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1206 10:45:05.475835  399286 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1206 10:45:05.475891  399286 command_runner.go:130] > # nri_disable_connections = false
	I1206 10:45:05.475901  399286 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1206 10:45:05.475906  399286 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1206 10:45:05.475911  399286 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1206 10:45:05.475918  399286 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1206 10:45:05.475923  399286 command_runner.go:130] > # NRI default validator configuration.
	I1206 10:45:05.475933  399286 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1206 10:45:05.475940  399286 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1206 10:45:05.475946  399286 command_runner.go:130] > # can be restricted/rejected:
	I1206 10:45:05.475950  399286 command_runner.go:130] > # - OCI hook injection
	I1206 10:45:05.475958  399286 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1206 10:45:05.475964  399286 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1206 10:45:05.475969  399286 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1206 10:45:05.475976  399286 command_runner.go:130] > # - adjustment of linux namespaces
	I1206 10:45:05.475983  399286 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1206 10:45:05.475990  399286 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1206 10:45:05.476000  399286 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1206 10:45:05.476005  399286 command_runner.go:130] > #
	I1206 10:45:05.476012  399286 command_runner.go:130] > # [crio.nri.default_validator]
	I1206 10:45:05.476020  399286 command_runner.go:130] > # nri_enable_default_validator = false
	I1206 10:45:05.476026  399286 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1206 10:45:05.476035  399286 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1206 10:45:05.476042  399286 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1206 10:45:05.476048  399286 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1206 10:45:05.476056  399286 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1206 10:45:05.476061  399286 command_runner.go:130] > # nri_validator_required_plugins = [
	I1206 10:45:05.476064  399286 command_runner.go:130] > # ]
	I1206 10:45:05.476070  399286 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1206 10:45:05.476079  399286 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1206 10:45:05.476083  399286 command_runner.go:130] > [crio.stats]
	I1206 10:45:05.476089  399286 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1206 10:45:05.476095  399286 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1206 10:45:05.476102  399286 command_runner.go:130] > # stats_collection_period = 0
	I1206 10:45:05.476109  399286 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1206 10:45:05.476119  399286 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1206 10:45:05.476124  399286 command_runner.go:130] > # collection_period = 0
	I1206 10:45:05.476211  399286 cni.go:84] Creating CNI manager for ""
	I1206 10:45:05.476226  399286 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:45:05.476254  399286 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 10:45:05.476282  399286 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-196950 NodeName:functional-196950 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:45:05.476417  399286 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-196950"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 10:45:05.476505  399286 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 10:45:05.483758  399286 command_runner.go:130] > kubeadm
	I1206 10:45:05.483779  399286 command_runner.go:130] > kubectl
	I1206 10:45:05.483784  399286 command_runner.go:130] > kubelet
	I1206 10:45:05.484784  399286 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:45:05.484852  399286 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:45:05.492924  399286 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1206 10:45:05.506239  399286 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 10:45:05.519506  399286 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1206 10:45:05.533524  399286 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 10:45:05.537326  399286 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1206 10:45:05.537418  399286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:45:05.647140  399286 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:45:05.721344  399286 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950 for IP: 192.168.49.2
	I1206 10:45:05.721367  399286 certs.go:195] generating shared ca certs ...
	I1206 10:45:05.721384  399286 certs.go:227] acquiring lock for ca certs: {Name:mke2ec61a37b6f3abbcbeb9abd23d6a19d011dd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:45:05.721593  399286 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.key
	I1206 10:45:05.721667  399286 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.key
	I1206 10:45:05.721683  399286 certs.go:257] generating profile certs ...
	I1206 10:45:05.721813  399286 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.key
	I1206 10:45:05.721910  399286 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.key.a77b39a6
	I1206 10:45:05.721994  399286 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.key
	I1206 10:45:05.722034  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1206 10:45:05.722057  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1206 10:45:05.722073  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1206 10:45:05.722118  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1206 10:45:05.722158  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1206 10:45:05.722199  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1206 10:45:05.722217  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1206 10:45:05.722228  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1206 10:45:05.722301  399286 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855.pem (1338 bytes)
	W1206 10:45:05.722365  399286 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855_empty.pem, impossibly tiny 0 bytes
	I1206 10:45:05.722388  399286 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 10:45:05.722448  399286 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem (1082 bytes)
	I1206 10:45:05.722502  399286 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:45:05.722537  399286 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem (1679 bytes)
	I1206 10:45:05.722611  399286 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem (1708 bytes)
	I1206 10:45:05.722670  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:45:05.722691  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855.pem -> /usr/share/ca-certificates/364855.pem
	I1206 10:45:05.722718  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem -> /usr/share/ca-certificates/3648552.pem
	I1206 10:45:05.723349  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:45:05.743026  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 10:45:05.763126  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:45:05.783337  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 10:45:05.802756  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 10:45:05.821457  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 10:45:05.839993  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:45:05.858402  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 10:45:05.876528  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:45:05.894729  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855.pem --> /usr/share/ca-certificates/364855.pem (1338 bytes)
	I1206 10:45:05.912947  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem --> /usr/share/ca-certificates/3648552.pem (1708 bytes)
	I1206 10:45:05.931356  399286 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
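	Above, the shared CA pair, the proxy-client CA, and the profile's apiserver and proxy-client certs are copied into /var/lib/minikube/certs, while the CA and the user-supplied certs are published under /usr/share/ca-certificates. To confirm a copied cert matches the local one, comparing SHA-256 fingerprints works; a sketch (the second command assumes the same profile, and $MINIKUBE_HOME stands in for the Jenkins workspace path seen in the log):

	    openssl x509 -noout -fingerprint -sha256 -in $MINIKUBE_HOME/.minikube/ca.crt
	    minikube ssh -p functional-196950 -- sudo openssl x509 -noout -fingerprint -sha256 -in /var/lib/minikube/certs/ca.crt
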
	I1206 10:45:05.945284  399286 ssh_runner.go:195] Run: openssl version
	I1206 10:45:05.951573  399286 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1206 10:45:05.951648  399286 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:45:05.959293  399286 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:45:05.967114  399286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:45:05.970832  399286 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  6 10:26 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:45:05.971103  399286 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:26 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:45:05.971168  399286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:45:06.014236  399286 command_runner.go:130] > b5213941
	I1206 10:45:06.014768  399286 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 10:45:06.023097  399286 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364855.pem
	I1206 10:45:06.030984  399286 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364855.pem /etc/ssl/certs/364855.pem
	I1206 10:45:06.039316  399286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364855.pem
	I1206 10:45:06.043457  399286 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  6 10:36 /usr/share/ca-certificates/364855.pem
	I1206 10:45:06.043549  399286 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:36 /usr/share/ca-certificates/364855.pem
	I1206 10:45:06.043624  399286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364855.pem
	I1206 10:45:06.084760  399286 command_runner.go:130] > 51391683
	I1206 10:45:06.084914  399286 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 10:45:06.092772  399286 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648552.pem
	I1206 10:45:06.100248  399286 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648552.pem /etc/ssl/certs/3648552.pem
	I1206 10:45:06.107970  399286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648552.pem
	I1206 10:45:06.112031  399286 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  6 10:36 /usr/share/ca-certificates/3648552.pem
	I1206 10:45:06.112134  399286 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:36 /usr/share/ca-certificates/3648552.pem
	I1206 10:45:06.112229  399286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648552.pem
	I1206 10:45:06.152822  399286 command_runner.go:130] > 3ec20f2e
	I1206 10:45:06.153315  399286 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
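	The three hash-and-link sequences above implement OpenSSL's hashed-directory lookup: openssl x509 -hash prints the subject-name hash (b5213941, 51391683, 3ec20f2e here), and a symlink named <hash>.0 in /etc/ssl/certs lets anything linked against OpenSSL resolve the CA by that hash. The same two steps for a single cert, as a sketch:

	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/${h}.0
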
	I1206 10:45:06.161105  399286 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:45:06.165043  399286 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:45:06.165068  399286 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1206 10:45:06.165075  399286 command_runner.go:130] > Device: 259,1	Inode: 1826360     Links: 1
	I1206 10:45:06.165081  399286 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 10:45:06.165087  399286 command_runner.go:130] > Access: 2025-12-06 10:40:58.003190996 +0000
	I1206 10:45:06.165092  399286 command_runner.go:130] > Modify: 2025-12-06 10:36:53.916464205 +0000
	I1206 10:45:06.165098  399286 command_runner.go:130] > Change: 2025-12-06 10:36:53.916464205 +0000
	I1206 10:45:06.165103  399286 command_runner.go:130] >  Birth: 2025-12-06 10:36:53.916464205 +0000
	I1206 10:45:06.165195  399286 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 10:45:06.207365  399286 command_runner.go:130] > Certificate will not expire
	I1206 10:45:06.207850  399286 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 10:45:06.248448  399286 command_runner.go:130] > Certificate will not expire
	I1206 10:45:06.248932  399286 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 10:45:06.289656  399286 command_runner.go:130] > Certificate will not expire
	I1206 10:45:06.290116  399286 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 10:45:06.330828  399286 command_runner.go:130] > Certificate will not expire
	I1206 10:45:06.331412  399286 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 10:45:06.372096  399286 command_runner.go:130] > Certificate will not expire
	I1206 10:45:06.372595  399286 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1206 10:45:06.413596  399286 command_runner.go:130] > Certificate will not expire
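	Each -checkend 86400 call above exits 0, printing "Certificate will not expire", only if the certificate is still valid 24 hours from now; a non-zero exit here is what would trigger certificate regeneration. Checking a single cert by hand, as a sketch:

	    sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt \
	      && echo 'valid for >24h' || echo 'expires within 24h'
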
	I1206 10:45:06.414056  399286 kubeadm.go:401] StartCluster: {Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:45:06.414151  399286 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:45:06.414217  399286 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:45:06.442677  399286 cri.go:89] found id: ""
	I1206 10:45:06.442751  399286 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:45:06.449938  399286 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1206 10:45:06.449962  399286 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1206 10:45:06.449969  399286 command_runner.go:130] > /var/lib/minikube/etcd:
	I1206 10:45:06.450931  399286 kubeadm.go:417] found existing configuration files, will attempt cluster restart
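	Restart detection is the single ls above: because /var/lib/kubelet/kubeadm-flags.env, /var/lib/kubelet/config.yaml and /var/lib/minikube/etcd all exist, minikube takes the restart path instead of re-running kubeadm init. The same probe, as a sketch:

	    sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd \
	      && echo 'existing cluster: restart path'
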
	I1206 10:45:06.450952  399286 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 10:45:06.451032  399286 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 10:45:06.459080  399286 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:45:06.459618  399286 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-196950" does not appear in /home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 10:45:06.459742  399286 kubeconfig.go:62] /home/jenkins/minikube-integration/22047-362985/kubeconfig needs updating (will repair): [kubeconfig missing "functional-196950" cluster setting kubeconfig missing "functional-196950" context setting]
	I1206 10:45:06.460016  399286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/kubeconfig: {Name:mk779651834cfbdc6f0b5e8f5a9abc0f05106181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:45:06.460484  399286 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 10:45:06.460638  399286 kapi.go:59] client config for functional-196950: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.crt", KeyFile:"/home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.key", CAFile:"/home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 10:45:06.461238  399286 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1206 10:45:06.461268  399286 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1206 10:45:06.461280  399286 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1206 10:45:06.461291  399286 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1206 10:45:06.461295  399286 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1206 10:45:06.461337  399286 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1206 10:45:06.461637  399286 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 10:45:06.473548  399286 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1206 10:45:06.473584  399286 kubeadm.go:602] duration metric: took 22.626231ms to restartPrimaryControlPlane
	I1206 10:45:06.473594  399286 kubeadm.go:403] duration metric: took 59.544914ms to StartCluster
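	The reconfiguration decision is the diff -u above between the kubeadm config already on disk and the newly rendered .new file: exit status 0 (no differences) means the control plane can be reused unchanged, which is why restartPrimaryControlPlane completes in ~23ms. As a sketch:

	    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
	      && echo 'no reconfiguration needed'
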
	I1206 10:45:06.473609  399286 settings.go:142] acquiring lock: {Name:mk789e01bfd4ab9fa1e2a8415fa99b570b26926a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:45:06.473671  399286 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 10:45:06.474312  399286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/kubeconfig: {Name:mk779651834cfbdc6f0b5e8f5a9abc0f05106181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:45:06.474518  399286 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 10:45:06.474963  399286 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 10:45:06.475042  399286 addons.go:70] Setting storage-provisioner=true in profile "functional-196950"
	I1206 10:45:06.475066  399286 addons.go:239] Setting addon storage-provisioner=true in "functional-196950"
	I1206 10:45:06.475092  399286 host.go:66] Checking if "functional-196950" exists ...
	I1206 10:45:06.475912  399286 cli_runner.go:164] Run: docker container inspect functional-196950 --format={{.State.Status}}
	I1206 10:45:06.476264  399286 config.go:182] Loaded profile config "functional-196950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:45:06.476354  399286 addons.go:70] Setting default-storageclass=true in profile "functional-196950"
	I1206 10:45:06.476394  399286 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-196950"
	I1206 10:45:06.476791  399286 cli_runner.go:164] Run: docker container inspect functional-196950 --format={{.State.Status}}
	I1206 10:45:06.481213  399286 out.go:179] * Verifying Kubernetes components...
	I1206 10:45:06.484465  399286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:45:06.517764  399286 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 10:45:06.517930  399286 kapi.go:59] client config for functional-196950: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.crt", KeyFile:"/home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.key", CAFile:"/home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 10:45:06.518202  399286 addons.go:239] Setting addon default-storageclass=true in "functional-196950"
	I1206 10:45:06.518232  399286 host.go:66] Checking if "functional-196950" exists ...
	I1206 10:45:06.518684  399286 cli_runner.go:164] Run: docker container inspect functional-196950 --format={{.State.Status}}
	I1206 10:45:06.522254  399286 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 10:45:06.525206  399286 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:06.525232  399286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 10:45:06.525299  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:06.551517  399286 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:06.551540  399286 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 10:45:06.551605  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:06.570954  399286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:45:06.593327  399286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
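	With the docker driver there is no VM: minikube reaches the node over a published SSH port. The docker container inspect format string above extracts the host port mapped to the container's 22/tcp (33158 here), and both ssh clients then connect to 127.0.0.1:33158 with the profile's id_rsa. Reproducing the lookup:

	    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-196950
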
	I1206 10:45:06.685314  399286 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:45:06.722168  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:06.737572  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:07.472063  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:07.472098  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:07.472124  399286 retry.go:31] will retry after 153.213078ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:07.472168  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:07.472179  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:07.472186  399286 retry.go:31] will retry after 247.840204ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
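	These apply failures are expected during a restart: kubectl's client-side validation first downloads the OpenAPI schema from the server, and with the apiserver not yet listening on 8441 that download gets connection refused before the manifest is ever submitted. minikube answers with retry.go's jittered backoff (153ms, 247ms, ... below) rather than passing --validate=false. A minimal equivalent of the loop, as a sketch with a fixed delay instead of minikube's jitter:

	    until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	        /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml; do
	      sleep 1   # retry until the apiserver accepts the manifest
	    done
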
	I1206 10:45:07.472279  399286 node_ready.go:35] waiting up to 6m0s for node "functional-196950" to be "Ready" ...
	I1206 10:45:07.472418  399286 type.go:168] "Request Body" body=""
	I1206 10:45:07.472509  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:07.472828  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
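	In parallel, node_ready polls GET /api/v1/nodes/functional-196950 roughly every 500ms for up to 6m; the empty status="" responses mean the TCP connection itself failed rather than the API returning an error. The equivalent readiness check with kubectl, as a sketch:

	    kubectl get node functional-196950 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
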
	I1206 10:45:07.626184  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:07.684274  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:07.688010  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:07.688045  399286 retry.go:31] will retry after 503.005947ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:07.720209  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:07.781565  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:07.785057  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:07.785089  399286 retry.go:31] will retry after 443.254463ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:07.973439  399286 type.go:168] "Request Body" body=""
	I1206 10:45:07.973529  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:07.974023  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:08.191658  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:08.229200  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:08.274450  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:08.282645  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:08.282730  399286 retry.go:31] will retry after 342.048952ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:08.327096  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:08.327147  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:08.327166  399286 retry.go:31] will retry after 504.811759ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:08.473470  399286 type.go:168] "Request Body" body=""
	I1206 10:45:08.473573  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:08.473913  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:08.625427  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:08.684176  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:08.687968  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:08.688010  399286 retry.go:31] will retry after 1.261411242s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:08.832256  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:08.891180  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:08.894801  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:08.894836  399286 retry.go:31] will retry after 546.340513ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:08.973077  399286 type.go:168] "Request Body" body=""
	I1206 10:45:08.973155  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:08.973522  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:09.442273  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:09.472729  399286 type.go:168] "Request Body" body=""
	I1206 10:45:09.472803  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:09.473092  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:09.473139  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
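	From here the log alternates addon-apply retries with node polls, all refused, until the kubelet restarted at 10:45:06 brings the apiserver static pod back up on 8441. Waiting on the endpoint by hand would look like this sketch (-k because the cluster CA is not in the host trust store):

	    until curl -sk https://192.168.49.2:8441/healthz >/dev/null; do sleep 1; done
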
	I1206 10:45:09.506571  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:09.510870  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:09.510955  399286 retry.go:31] will retry after 985.837399ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:09.950606  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:09.973212  399286 type.go:168] "Request Body" body=""
	I1206 10:45:09.973298  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:09.973577  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:10.030286  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:10.030402  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:10.030452  399286 retry.go:31] will retry after 829.97822ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:10.472519  399286 type.go:168] "Request Body" body=""
	I1206 10:45:10.472588  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:10.472971  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:10.497156  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:10.582698  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:10.582757  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:10.582779  399286 retry.go:31] will retry after 2.303396874s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:10.861265  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:10.923027  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:10.923124  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:10.923150  399286 retry.go:31] will retry after 2.722563752s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:10.973315  399286 type.go:168] "Request Body" body=""
	I1206 10:45:10.973396  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:10.973700  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:11.473530  399286 type.go:168] "Request Body" body=""
	I1206 10:45:11.473608  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:11.474011  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:11.474073  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:11.972906  399286 type.go:168] "Request Body" body=""
	I1206 10:45:11.972979  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:11.973246  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:12.472617  399286 type.go:168] "Request Body" body=""
	I1206 10:45:12.472696  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:12.473071  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:12.886451  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:12.946418  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:12.951114  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:12.951151  399286 retry.go:31] will retry after 2.435253477s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:12.973196  399286 type.go:168] "Request Body" body=""
	I1206 10:45:12.973267  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:12.973628  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:13.473384  399286 type.go:168] "Request Body" body=""
	I1206 10:45:13.473455  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:13.473719  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:13.646250  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:13.707346  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:13.707418  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:13.707442  399286 retry.go:31] will retry after 2.81497333s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:13.972564  399286 type.go:168] "Request Body" body=""
	I1206 10:45:13.972648  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:13.972980  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:13.973040  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:14.472608  399286 type.go:168] "Request Body" body=""
	I1206 10:45:14.472684  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:14.473066  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:14.972534  399286 type.go:168] "Request Body" body=""
	I1206 10:45:14.972625  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:14.972955  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:15.386668  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:15.447515  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:15.447555  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:15.447573  399286 retry.go:31] will retry after 2.327509257s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
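
Note why the apply fails before anything is submitted: with validation enabled, kubectl first downloads the OpenAPI schema from the apiserver (the /openapi/v2 URL in the error) to validate the manifest client-side, so a dead apiserver fails the command at the validation step — which is why kubectl suggests --validate=false. A tiny Go probe of the port, using the host and port from the log, reproduces the underlying failure:

// probe_sketch.go — check whether anything is listening on the apiserver port.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
	if err != nil {
		// Prints e.g. "dial tcp 192.168.49.2:8441: connect: connection refused".
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}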
	I1206 10:45:15.472847  399286 type.go:168] "Request Body" body=""
	I1206 10:45:15.472922  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:15.473272  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:15.973226  399286 type.go:168] "Request Body" body=""
	I1206 10:45:15.973305  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:15.973654  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:15.973708  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:16.473465  399286 type.go:168] "Request Body" body=""
	I1206 10:45:16.473539  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:16.473810  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:16.523188  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:16.580568  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:16.584128  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:16.584161  399286 retry.go:31] will retry after 3.565207529s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:16.972816  399286 type.go:168] "Request Body" body=""
	I1206 10:45:16.972893  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:16.973236  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:17.472948  399286 type.go:168] "Request Body" body=""
	I1206 10:45:17.473028  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:17.473355  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:17.775942  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:17.833742  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:17.838032  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:17.838073  399286 retry.go:31] will retry after 9.046125485s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:17.973259  399286 type.go:168] "Request Body" body=""
	I1206 10:45:17.973333  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:17.973605  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:18.473464  399286 type.go:168] "Request Body" body=""
	I1206 10:45:18.473544  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:18.473887  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:18.473936  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:18.972571  399286 type.go:168] "Request Body" body=""
	I1206 10:45:18.972668  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:18.973005  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:19.472497  399286 type.go:168] "Request Body" body=""
	I1206 10:45:19.472590  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:19.472870  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
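
The paired "Request"/"Response" lines are client-go's round_trippers logging, emitted at high verbosity by wrapping the HTTP transport; an empty status with milliseconds=0 means the TCP dial failed before any HTTP response existed. A minimal sketch of that wrapped-RoundTripper pattern — illustrative only, not client-go's actual logger:

// roundtrip_sketch.go — decorate an HTTP transport to log each request/response.
package main

import (
	"fmt"
	"net/http"
	"time"
)

type loggingTransport struct{ next http.RoundTripper }

func (t loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	start := time.Now()
	fmt.Printf("\"Request\" verb=%q url=%q\n", req.Method, req.URL.String())
	resp, err := t.next.RoundTrip(req)
	ms := time.Since(start).Milliseconds()
	if err != nil {
		// Connection refused: there is no response, so status logs as empty.
		fmt.Printf("\"Response\" status=%q milliseconds=%d\n", "", ms)
		return nil, err
	}
	fmt.Printf("\"Response\" status=%q milliseconds=%d\n", resp.Status, ms)
	return resp, nil
}

func main() {
	client := &http.Client{Transport: loggingTransport{http.DefaultTransport}}
	if _, err := client.Get("https://192.168.49.2:8441/api"); err != nil {
		fmt.Println("request failed:", err)
	}
}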
	I1206 10:45:19.972598  399286 type.go:168] "Request Body" body=""
	I1206 10:45:19.972674  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:19.972970  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:20.150467  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:20.215833  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:20.215885  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:20.215905  399286 retry.go:31] will retry after 9.222024728s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
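
Each "ssh_runner.go:195] Run:" line amounts to executing kubectl inside the minikube node over SSH, with KUBECONFIG pointed at the control plane's kubeconfig. A rough sketch of the equivalent invocation; the ssh user and flags here are assumptions for illustration (minikube drives SSH programmatically rather than shelling out like this):

// sshrun_sketch.go — approximate what one ssh_runner "Run:" line does.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// "docker@" is an assumed ssh user for illustration; paths are from the log.
	cmd := exec.Command("ssh", "docker@192.168.49.2",
		"sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"apply", "--force", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		// With the apiserver down this exits non-zero, mirroring the
		// "Process exited with status 1" lines above.
		fmt.Println("apply failed:", err)
	}
}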
	I1206 10:45:20.473247  399286 type.go:168] "Request Body" body=""
	I1206 10:45:20.473322  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:20.473670  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:20.973445  399286 type.go:168] "Request Body" body=""
	I1206 10:45:20.973528  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:20.973801  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:20.973861  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:21.472555  399286 type.go:168] "Request Body" body=""
	I1206 10:45:21.472664  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:21.473020  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:21.972799  399286 type.go:168] "Request Body" body=""
	I1206 10:45:21.972877  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:21.973219  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:22.472497  399286 type.go:168] "Request Body" body=""
	I1206 10:45:22.472576  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:22.472904  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:22.972589  399286 type.go:168] "Request Body" body=""
	I1206 10:45:22.972674  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:22.973015  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:23.472753  399286 type.go:168] "Request Body" body=""
	I1206 10:45:23.472835  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:23.473181  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:23.473243  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:23.972733  399286 type.go:168] "Request Body" body=""
	I1206 10:45:23.972804  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:23.973079  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:24.472742  399286 type.go:168] "Request Body" body=""
	I1206 10:45:24.472825  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:24.473193  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:24.972805  399286 type.go:168] "Request Body" body=""
	I1206 10:45:24.972890  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:24.973299  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:25.473054  399286 type.go:168] "Request Body" body=""
	I1206 10:45:25.473127  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:25.473403  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:25.473453  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:25.972761  399286 type.go:168] "Request Body" body=""
	I1206 10:45:25.972834  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:25.973177  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:26.473056  399286 type.go:168] "Request Body" body=""
	I1206 10:45:26.473132  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:26.473476  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:26.884353  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:26.943184  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:26.947029  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:26.947062  399286 retry.go:31] will retry after 13.756266916s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:26.973239  399286 type.go:168] "Request Body" body=""
	I1206 10:45:26.973309  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:26.973589  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:27.473507  399286 type.go:168] "Request Body" body=""
	I1206 10:45:27.473585  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:27.473949  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:27.474006  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:27.972689  399286 type.go:168] "Request Body" body=""
	I1206 10:45:27.972763  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:27.973145  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:28.472835  399286 type.go:168] "Request Body" body=""
	I1206 10:45:28.472909  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:28.473194  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:28.972604  399286 type.go:168] "Request Body" body=""
	I1206 10:45:28.972682  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:28.972972  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:29.438741  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:29.473252  399286 type.go:168] "Request Body" body=""
	I1206 10:45:29.473342  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:29.473619  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:29.500011  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:29.500052  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:29.500073  399286 retry.go:31] will retry after 11.458105653s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:29.972514  399286 type.go:168] "Request Body" body=""
	I1206 10:45:29.972601  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:29.972925  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:29.972975  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:30.472573  399286 type.go:168] "Request Body" body=""
	I1206 10:45:30.472647  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:30.472967  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:30.972595  399286 type.go:168] "Request Body" body=""
	I1206 10:45:30.972703  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:30.973084  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:31.472784  399286 type.go:168] "Request Body" body=""
	I1206 10:45:31.472855  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:31.473199  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:31.972958  399286 type.go:168] "Request Body" body=""
	I1206 10:45:31.973040  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:31.973376  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:31.973432  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:32.473378  399286 type.go:168] "Request Body" body=""
	I1206 10:45:32.473454  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:32.473784  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:32.972445  399286 type.go:168] "Request Body" body=""
	I1206 10:45:32.972534  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:32.972822  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:33.472492  399286 type.go:168] "Request Body" body=""
	I1206 10:45:33.472570  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:33.472871  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:33.972494  399286 type.go:168] "Request Body" body=""
	I1206 10:45:33.972591  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:33.972945  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:34.472578  399286 type.go:168] "Request Body" body=""
	I1206 10:45:34.472650  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:34.473009  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:34.473064  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:34.972732  399286 type.go:168] "Request Body" body=""
	I1206 10:45:34.972808  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:34.973199  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:35.472761  399286 type.go:168] "Request Body" body=""
	I1206 10:45:35.472857  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:35.473192  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:35.972534  399286 type.go:168] "Request Body" body=""
	I1206 10:45:35.972619  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:35.972903  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:36.472813  399286 type.go:168] "Request Body" body=""
	I1206 10:45:36.472898  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:36.473245  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:36.473300  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:36.972930  399286 type.go:168] "Request Body" body=""
	I1206 10:45:36.973016  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:36.973389  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:37.473181  399286 type.go:168] "Request Body" body=""
	I1206 10:45:37.473253  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:37.473531  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:37.973319  399286 type.go:168] "Request Body" body=""
	I1206 10:45:37.973403  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:37.973730  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:38.472498  399286 type.go:168] "Request Body" body=""
	I1206 10:45:38.472583  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:38.472928  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:38.972624  399286 type.go:168] "Request Body" body=""
	I1206 10:45:38.972703  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:38.973126  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:38.973176  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:39.472568  399286 type.go:168] "Request Body" body=""
	I1206 10:45:39.472665  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:39.472987  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:39.972728  399286 type.go:168] "Request Body" body=""
	I1206 10:45:39.972805  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:39.973175  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:40.473384  399286 type.go:168] "Request Body" body=""
	I1206 10:45:40.473456  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:40.473714  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:40.704276  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:40.766032  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:40.766082  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:40.766102  399286 retry.go:31] will retry after 12.834175432s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:40.958402  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:40.972905  399286 type.go:168] "Request Body" body=""
	I1206 10:45:40.972992  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:40.973301  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:40.973353  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:41.030830  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:41.030878  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:41.030900  399286 retry.go:31] will retry after 14.333484689s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:41.472501  399286 type.go:168] "Request Body" body=""
	I1206 10:45:41.472600  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:41.472944  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:41.972853  399286 type.go:168] "Request Body" body=""
	I1206 10:45:41.972920  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:41.973187  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:42.472557  399286 type.go:168] "Request Body" body=""
	I1206 10:45:42.472636  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:42.472968  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:42.972558  399286 type.go:168] "Request Body" body=""
	I1206 10:45:42.972635  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:42.972937  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:43.472504  399286 type.go:168] "Request Body" body=""
	I1206 10:45:43.472579  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:43.472849  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:43.472893  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:43.972555  399286 type.go:168] "Request Body" body=""
	I1206 10:45:43.972629  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:43.972940  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:44.472608  399286 type.go:168] "Request Body" body=""
	I1206 10:45:44.472707  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:44.473088  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:44.972724  399286 type.go:168] "Request Body" body=""
	I1206 10:45:44.972794  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:44.973077  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:45.472782  399286 type.go:168] "Request Body" body=""
	I1206 10:45:45.472865  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:45.473241  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:45.473304  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:45.972826  399286 type.go:168] "Request Body" body=""
	I1206 10:45:45.972906  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:45.973262  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:46.473108  399286 type.go:168] "Request Body" body=""
	I1206 10:45:46.473196  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:46.473467  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:46.973436  399286 type.go:168] "Request Body" body=""
	I1206 10:45:46.973508  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:46.973863  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:47.472551  399286 type.go:168] "Request Body" body=""
	I1206 10:45:47.472626  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:47.472969  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:47.972653  399286 type.go:168] "Request Body" body=""
	I1206 10:45:47.972724  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:47.972985  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:47.973026  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:48.472555  399286 type.go:168] "Request Body" body=""
	I1206 10:45:48.472631  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:48.472979  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:48.972567  399286 type.go:168] "Request Body" body=""
	I1206 10:45:48.972648  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:48.973011  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:49.472610  399286 type.go:168] "Request Body" body=""
	I1206 10:45:49.472682  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:49.473011  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:49.972715  399286 type.go:168] "Request Body" body=""
	I1206 10:45:49.972814  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:49.973135  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:49.973192  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:50.472573  399286 type.go:168] "Request Body" body=""
	I1206 10:45:50.472649  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:50.473004  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:50.972718  399286 type.go:168] "Request Body" body=""
	I1206 10:45:50.972788  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:50.973064  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:51.472737  399286 type.go:168] "Request Body" body=""
	I1206 10:45:51.472812  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:51.473132  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:51.972870  399286 type.go:168] "Request Body" body=""
	I1206 10:45:51.972960  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:51.973314  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:51.973366  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:52.472505  399286 type.go:168] "Request Body" body=""
	I1206 10:45:52.472573  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:52.472847  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:52.972578  399286 type.go:168] "Request Body" body=""
	I1206 10:45:52.972662  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:52.973040  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:53.472618  399286 type.go:168] "Request Body" body=""
	I1206 10:45:53.472697  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:53.473027  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:53.600459  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:53.661736  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:53.665292  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:53.665323  399286 retry.go:31] will retry after 22.486760262s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:53.972617  399286 type.go:168] "Request Body" body=""
	I1206 10:45:53.972697  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:53.972964  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:54.472589  399286 type.go:168] "Request Body" body=""
	I1206 10:45:54.472671  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:54.473035  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:54.473093  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:54.972750  399286 type.go:168] "Request Body" body=""
	I1206 10:45:54.972837  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:54.973175  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:55.364722  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:55.425632  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:55.425678  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:55.425713  399286 retry.go:31] will retry after 12.507538253s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... identical GET polling of /api/v1/nodes/functional-196950 at 10:45:55.47, 10:45:55.97 and 10:45:56.47 returned the same empty response; omitted ...]
	W1206 10:45:56.473432  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
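	Each node_ready.go:55 warning above is one failed iteration of a readiness poll: GET the node object, inspect its Ready condition, and tolerate connection-refused while the apiserver restarts. A hedged client-go sketch of such a loop (node name and kubeconfig path taken from the log; the loop itself is an assumption, not minikube's code):

// nodeready_sketch.go - illustrative client-go loop mirroring the polling
// above: GET the node every 500ms and check the Ready condition, treating
// "connection refused" as retryable while the apiserver comes back.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 500ms, as the timestamps above suggest, until Ready or timeout.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond,
		5*time.Minute, true, func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, "functional-196950", metav1.GetOptions{})
			if err != nil {
				// Connection refused is expected mid-restart: log and keep polling.
				fmt.Printf("error getting node (will retry): %v\n", err)
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	fmt.Println("wait result:", err)
}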
	[... identical GET polling every ~500ms from 10:45:56.97 through 10:46:07.47, with node_ready.go:55 connection-refused warnings at 10:45:58.97, 10:46:01.47, 10:46:03.97 and 10:46:05.97; omitted ...]
	I1206 10:46:07.933511  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:46:07.973023  399286 type.go:168] "Request Body" body=""
	I1206 10:46:07.973096  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:07.973373  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:07.994514  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:46:07.994569  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:46:07.994603  399286 retry.go:31] will retry after 24.706041433s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
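	Each apply attempt shells out to the pinned kubectl binary with KUBECONFIG pointed at the cluster, which is why a dead apiserver surfaces as the openapi-download validation error above rather than as a direct connect error from minikube itself. A sketch of that invocation, assuming a hypothetical applyManifest helper (paths copied from the log; not minikube's ssh_runner implementation):

// applysketch.go - illustrative: runs the same kubectl apply command the
// ssh_runner.go lines above show. Paths come from the log; the helper
// itself is an assumption.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func applyManifest(manifest string) error {
	// sudo accepts VAR=value assignments before the command name.
	cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"apply", "--force", "-f", manifest)
	out, err := cmd.CombinedOutput()
	if err != nil {
		// With the apiserver down, kubectl fails while downloading the openapi
		// schema for client-side validation, producing the error seen above.
		return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
	}
	return nil
}

func main() {
	if err := applyManifest("/etc/kubernetes/addons/storage-provisioner.yaml"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}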
	I1206 10:46:08.473166  399286 type.go:168] "Request Body" body=""
	I1206 10:46:08.473240  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:08.473542  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:08.473592  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	[... identical GET polling every ~500ms from 10:46:08.97 through 10:46:15.97, with further node_ready.go:55 connection-refused warnings at 10:46:10.97, 10:46:12.97 and 10:46:15.47; omitted ...]
	I1206 10:46:16.153289  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:46:16.211194  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:46:16.214959  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:46:16.214991  399286 retry.go:31] will retry after 16.737835039s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:46:16.473494  399286 type.go:168] "Request Body" body=""
	I1206 10:46:16.473573  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:16.473903  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... identical GET polling every ~500ms from 10:46:16.97 through 10:46:32.47, with node_ready.go:55 connection-refused warnings at 10:46:17.47, 10:46:19.97, 10:46:21.97, 10:46:24.47, 10:46:26.47, 10:46:28.97 and 10:46:31.47; omitted ...]
	I1206 10:46:32.701368  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:46:32.764470  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:46:32.764520  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:46:32.764620  399286 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
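	The "Request"/"Response" pairs throughout this log are emitted by a debugging round-tripper wrapped around the client's HTTP transport; status="" with milliseconds=0 means no response ever arrived. A simplified stand-in for that wrapper, a minimal sketch rather than client-go's actual round_trippers.go:

// logrt_sketch.go - illustrative logging RoundTripper in the spirit of the
// round_trippers.go lines above; a simplified stand-in, not client-go code.
package main

import (
	"fmt"
	"net/http"
	"time"
)

type loggingRT struct{ next http.RoundTripper }

func (l loggingRT) RoundTrip(req *http.Request) (*http.Response, error) {
	start := time.Now()
	fmt.Printf("Request verb=%q url=%q\n", req.Method, req.URL)
	resp, err := l.next.RoundTrip(req)
	ms := time.Since(start).Milliseconds()
	if err != nil {
		// Matches the empty status="" lines above: no response ever arrived.
		fmt.Printf("Response status=%q milliseconds=%d err=%v\n", "", ms, err)
		return nil, err
	}
	fmt.Printf("Response status=%q milliseconds=%d\n", resp.Status, ms)
	return resp, nil
}

func main() {
	client := &http.Client{Transport: loggingRT{next: http.DefaultTransport}}
	_, err := client.Get("https://192.168.49.2:8441/api/v1/nodes/functional-196950")
	fmt.Println("final error:", err)
}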
	I1206 10:46:32.953898  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:46:32.973480  399286 type.go:168] "Request Body" body=""
	I1206 10:46:32.973551  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:32.973819  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:33.013712  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:46:33.017430  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:46:33.017463  399286 retry.go:31] will retry after 30.205234164s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:46:33.472638  399286 type.go:168] "Request Body" body=""
	I1206 10:46:33.472723  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:33.473069  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:33.473124  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	[... identical GET polling every ~500ms from 10:46:33.97 through 10:46:45.97, with node_ready.go:55 connection-refused warnings at 10:46:35.97, 10:46:37.97, 10:46:39.97, 10:46:42.47 and 10:46:44.97; omitted ...]
	I1206 10:46:46.473093  399286 type.go:168] "Request Body" body=""
	I1206 10:46:46.473169  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:46.473502  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:46.972476  399286 type.go:168] "Request Body" body=""
	I1206 10:46:46.972548  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:46.972884  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:47.472504  399286 type.go:168] "Request Body" body=""
	I1206 10:46:47.472574  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:47.472853  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:47.472901  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:47.972549  399286 type.go:168] "Request Body" body=""
	I1206 10:46:47.972622  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:47.972948  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:48.472667  399286 type.go:168] "Request Body" body=""
	I1206 10:46:48.472745  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:48.473110  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:48.972503  399286 type.go:168] "Request Body" body=""
	I1206 10:46:48.972577  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:48.972841  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:49.472552  399286 type.go:168] "Request Body" body=""
	I1206 10:46:49.472628  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:49.472955  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:49.473012  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:49.972575  399286 type.go:168] "Request Body" body=""
	I1206 10:46:49.972653  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:49.972977  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:50.472510  399286 type.go:168] "Request Body" body=""
	I1206 10:46:50.472585  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:50.472943  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:50.972627  399286 type.go:168] "Request Body" body=""
	I1206 10:46:50.972741  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:50.973101  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:51.472815  399286 type.go:168] "Request Body" body=""
	I1206 10:46:51.472914  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:51.473280  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:51.473354  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:51.973315  399286 type.go:168] "Request Body" body=""
	I1206 10:46:51.973390  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:51.973667  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:52.473496  399286 type.go:168] "Request Body" body=""
	I1206 10:46:52.473597  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:52.473928  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:52.972621  399286 type.go:168] "Request Body" body=""
	I1206 10:46:52.972697  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:52.973027  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:53.472511  399286 type.go:168] "Request Body" body=""
	I1206 10:46:53.472581  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:53.472850  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:53.972553  399286 type.go:168] "Request Body" body=""
	I1206 10:46:53.972631  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:53.973006  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:53.973079  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:54.472752  399286 type.go:168] "Request Body" body=""
	I1206 10:46:54.472832  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:54.473199  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:54.972887  399286 type.go:168] "Request Body" body=""
	I1206 10:46:54.972975  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:54.973260  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:55.472573  399286 type.go:168] "Request Body" body=""
	I1206 10:46:55.472649  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:55.473014  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:55.972568  399286 type.go:168] "Request Body" body=""
	I1206 10:46:55.972665  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:55.973053  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:55.973130  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:56.472795  399286 type.go:168] "Request Body" body=""
	I1206 10:46:56.472877  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:56.473146  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:56.972884  399286 type.go:168] "Request Body" body=""
	I1206 10:46:56.972967  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:56.973286  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:57.473158  399286 type.go:168] "Request Body" body=""
	I1206 10:46:57.473263  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:57.473709  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:57.973446  399286 type.go:168] "Request Body" body=""
	I1206 10:46:57.973526  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:57.973793  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:57.973842  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:58.472543  399286 type.go:168] "Request Body" body=""
	I1206 10:46:58.472635  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:58.473049  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:58.972820  399286 type.go:168] "Request Body" body=""
	I1206 10:46:58.972904  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:58.973235  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:59.472499  399286 type.go:168] "Request Body" body=""
	I1206 10:46:59.472588  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:59.472924  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:59.972544  399286 type.go:168] "Request Body" body=""
	I1206 10:46:59.972618  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:59.972934  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:00.472668  399286 type.go:168] "Request Body" body=""
	I1206 10:47:00.472777  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:00.473113  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:00.473167  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:00.972603  399286 type.go:168] "Request Body" body=""
	I1206 10:47:00.972685  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:00.973038  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:01.472631  399286 type.go:168] "Request Body" body=""
	I1206 10:47:01.472707  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:01.473290  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:01.973256  399286 type.go:168] "Request Body" body=""
	I1206 10:47:01.973331  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:01.973589  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:02.473484  399286 type.go:168] "Request Body" body=""
	I1206 10:47:02.473561  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:02.473896  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:02.473948  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:02.972573  399286 type.go:168] "Request Body" body=""
	I1206 10:47:02.972648  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:02.972960  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
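The loop above is a standard Kubernetes readiness poll: fetch the node, check its Ready condition, treat transport errors as retryable. A minimal client-go sketch of that pattern (illustrative only, not minikube's actual node_ready.go; the kubeconfig path and node name are taken from the log above):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	// Poll every 500ms, as in the log, until the node reports Ready or we time out.
    	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := client.CoreV1().Nodes().Get(ctx, "functional-196950", metav1.GetOptions{})
    			if err != nil {
    				// "connection refused" lands here; returning (false, nil) keeps retrying.
    				fmt.Printf("will retry: %v\n", err)
    				return false, nil
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    	if err != nil {
    		fmt.Printf("node never became Ready: %v\n", err)
    	}
    }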
	I1206 10:47:03.223490  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:47:03.285940  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:47:03.285995  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:47:03.286078  399286 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1206 10:47:03.289299  399286 out.go:179] * Enabled addons: 
	I1206 10:47:03.293166  399286 addons.go:530] duration metric: took 1m56.818196786s for enable addons: enabled=[]
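The storageclass failure above is a secondary symptom: kubectl's client-side validation first fetches the server's OpenAPI schema, so while the apiserver on :8441 is down even a well-formed manifest fails before anything is sent. The workaround the stderr itself suggests, shown here with the exact paths from the log (note it only skips schema validation; the apply would still be refused until the apiserver is reachable):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false \
        -f /etc/kubernetes/addons/storageclass.yaml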
	I1206 10:47:03.473269  399286 type.go:168] "Request Body" body=""
	I1206 10:47:03.473338  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:03.473598  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-196950 poll repeats every ~500ms through 10:47:39.47, each attempt still refused; node_ready.go:55 "will retry" warnings continue at ~2s intervals, the last at 10:47:38.973070 ...]
	I1206 10:47:39.972558  399286 type.go:168] "Request Body" body=""
	I1206 10:47:39.972636  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:39.972965  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:40.472567  399286 type.go:168] "Request Body" body=""
	I1206 10:47:40.472639  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:40.472972  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:40.972512  399286 type.go:168] "Request Body" body=""
	I1206 10:47:40.972588  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:40.972883  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:41.472541  399286 type.go:168] "Request Body" body=""
	I1206 10:47:41.472626  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:41.472980  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:41.473038  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:41.972863  399286 type.go:168] "Request Body" body=""
	I1206 10:47:41.972939  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:41.973307  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:42.472600  399286 type.go:168] "Request Body" body=""
	I1206 10:47:42.472680  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:42.472974  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:42.972681  399286 type.go:168] "Request Body" body=""
	I1206 10:47:42.972759  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:42.973100  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:43.472601  399286 type.go:168] "Request Body" body=""
	I1206 10:47:43.472694  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:43.473056  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:43.473116  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:43.972500  399286 type.go:168] "Request Body" body=""
	I1206 10:47:43.972579  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:43.972899  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:44.472587  399286 type.go:168] "Request Body" body=""
	I1206 10:47:44.472675  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:44.473014  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:44.972574  399286 type.go:168] "Request Body" body=""
	I1206 10:47:44.972651  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:44.973031  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:45.472655  399286 type.go:168] "Request Body" body=""
	I1206 10:47:45.472726  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:45.473026  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:45.972718  399286 type.go:168] "Request Body" body=""
	I1206 10:47:45.972800  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:45.973152  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:45.973210  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:46.472509  399286 type.go:168] "Request Body" body=""
	I1206 10:47:46.472600  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:46.472959  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:46.972791  399286 type.go:168] "Request Body" body=""
	I1206 10:47:46.972860  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:46.973128  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:47.472798  399286 type.go:168] "Request Body" body=""
	I1206 10:47:47.472874  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:47.473208  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:47.972568  399286 type.go:168] "Request Body" body=""
	I1206 10:47:47.972648  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:47.972980  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:48.473410  399286 type.go:168] "Request Body" body=""
	I1206 10:47:48.473482  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:48.473747  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:48.473789  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:48.973486  399286 type.go:168] "Request Body" body=""
	I1206 10:47:48.973564  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:48.973890  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:49.472605  399286 type.go:168] "Request Body" body=""
	I1206 10:47:49.472724  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:49.473137  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:49.972518  399286 type.go:168] "Request Body" body=""
	I1206 10:47:49.972592  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:49.972867  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:50.472548  399286 type.go:168] "Request Body" body=""
	I1206 10:47:50.472628  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:50.472960  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:50.972581  399286 type.go:168] "Request Body" body=""
	I1206 10:47:50.972656  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:50.972999  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:50.973058  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:51.472529  399286 type.go:168] "Request Body" body=""
	I1206 10:47:51.472601  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:51.472873  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:51.972855  399286 type.go:168] "Request Body" body=""
	I1206 10:47:51.972934  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:51.973251  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:52.472527  399286 type.go:168] "Request Body" body=""
	I1206 10:47:52.472603  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:52.472925  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:52.972632  399286 type.go:168] "Request Body" body=""
	I1206 10:47:52.972710  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:52.973009  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:53.472551  399286 type.go:168] "Request Body" body=""
	I1206 10:47:53.472635  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:53.473004  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:53.473083  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:53.972778  399286 type.go:168] "Request Body" body=""
	I1206 10:47:53.972868  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:53.973278  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:54.472595  399286 type.go:168] "Request Body" body=""
	I1206 10:47:54.472680  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:54.473008  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:54.972544  399286 type.go:168] "Request Body" body=""
	I1206 10:47:54.972624  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:54.972997  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:55.472544  399286 type.go:168] "Request Body" body=""
	I1206 10:47:55.472633  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:55.472967  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:55.972686  399286 type.go:168] "Request Body" body=""
	I1206 10:47:55.972759  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:55.973084  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:55.973129  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:56.472527  399286 type.go:168] "Request Body" body=""
	I1206 10:47:56.472600  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:56.472935  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:56.972607  399286 type.go:168] "Request Body" body=""
	I1206 10:47:56.972688  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:56.973052  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:57.472495  399286 type.go:168] "Request Body" body=""
	I1206 10:47:57.472571  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:57.472885  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:57.972579  399286 type.go:168] "Request Body" body=""
	I1206 10:47:57.972653  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:57.972989  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:58.472576  399286 type.go:168] "Request Body" body=""
	I1206 10:47:58.472654  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:58.472981  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:58.473038  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:58.972524  399286 type.go:168] "Request Body" body=""
	I1206 10:47:58.972595  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:58.972920  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:59.472623  399286 type.go:168] "Request Body" body=""
	I1206 10:47:59.472702  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:59.473058  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:59.972774  399286 type.go:168] "Request Body" body=""
	I1206 10:47:59.972856  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:59.973198  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:00.472879  399286 type.go:168] "Request Body" body=""
	I1206 10:48:00.472963  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:00.473302  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:00.473350  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:00.973100  399286 type.go:168] "Request Body" body=""
	I1206 10:48:00.973182  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:00.973500  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:01.473348  399286 type.go:168] "Request Body" body=""
	I1206 10:48:01.473426  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:01.473749  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:01.972487  399286 type.go:168] "Request Body" body=""
	I1206 10:48:01.972565  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:01.972839  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:02.472524  399286 type.go:168] "Request Body" body=""
	I1206 10:48:02.472604  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:02.472916  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:02.972566  399286 type.go:168] "Request Body" body=""
	I1206 10:48:02.972640  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:02.972945  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:02.972990  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:03.472551  399286 type.go:168] "Request Body" body=""
	I1206 10:48:03.472641  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:03.472970  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:03.972530  399286 type.go:168] "Request Body" body=""
	I1206 10:48:03.972607  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:03.972945  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:04.472651  399286 type.go:168] "Request Body" body=""
	I1206 10:48:04.472730  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:04.473079  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:04.972502  399286 type.go:168] "Request Body" body=""
	I1206 10:48:04.972576  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:04.972860  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:05.472568  399286 type.go:168] "Request Body" body=""
	I1206 10:48:05.472646  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:05.473022  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:05.473077  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:05.972744  399286 type.go:168] "Request Body" body=""
	I1206 10:48:05.972834  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:05.973199  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:06.473241  399286 type.go:168] "Request Body" body=""
	I1206 10:48:06.473315  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:06.473604  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:06.972611  399286 type.go:168] "Request Body" body=""
	I1206 10:48:06.972691  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:06.972992  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:07.472569  399286 type.go:168] "Request Body" body=""
	I1206 10:48:07.472658  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:07.473030  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:07.972580  399286 type.go:168] "Request Body" body=""
	I1206 10:48:07.972659  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:07.972925  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:07.972966  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:08.472573  399286 type.go:168] "Request Body" body=""
	I1206 10:48:08.472665  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:08.472999  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:08.972712  399286 type.go:168] "Request Body" body=""
	I1206 10:48:08.972805  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:08.973106  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:09.472510  399286 type.go:168] "Request Body" body=""
	I1206 10:48:09.472584  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:09.472910  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:09.972623  399286 type.go:168] "Request Body" body=""
	I1206 10:48:09.972697  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:09.973067  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:09.973119  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:10.472815  399286 type.go:168] "Request Body" body=""
	I1206 10:48:10.472886  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:10.473224  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:10.972551  399286 type.go:168] "Request Body" body=""
	I1206 10:48:10.972627  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:10.972947  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:11.472597  399286 type.go:168] "Request Body" body=""
	I1206 10:48:11.472675  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:11.473029  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:11.972904  399286 type.go:168] "Request Body" body=""
	I1206 10:48:11.972981  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:11.973328  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:11.973383  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:12.472840  399286 type.go:168] "Request Body" body=""
	I1206 10:48:12.472917  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:12.473212  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:12.972545  399286 type.go:168] "Request Body" body=""
	I1206 10:48:12.972620  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:12.972959  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:13.472663  399286 type.go:168] "Request Body" body=""
	I1206 10:48:13.472740  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:13.473115  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:13.972809  399286 type.go:168] "Request Body" body=""
	I1206 10:48:13.972882  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:13.973148  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:14.472560  399286 type.go:168] "Request Body" body=""
	I1206 10:48:14.472633  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:14.472974  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:14.473026  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:14.972547  399286 type.go:168] "Request Body" body=""
	I1206 10:48:14.972633  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:14.972981  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:15.472494  399286 type.go:168] "Request Body" body=""
	I1206 10:48:15.472572  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:15.472888  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:15.972557  399286 type.go:168] "Request Body" body=""
	I1206 10:48:15.972632  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:15.973009  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:16.472796  399286 type.go:168] "Request Body" body=""
	I1206 10:48:16.472875  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:16.473235  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:16.473293  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:16.972964  399286 type.go:168] "Request Body" body=""
	I1206 10:48:16.973036  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:16.973307  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:17.473074  399286 type.go:168] "Request Body" body=""
	I1206 10:48:17.473147  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:17.473485  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:17.973295  399286 type.go:168] "Request Body" body=""
	I1206 10:48:17.973378  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:17.973725  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:18.473438  399286 type.go:168] "Request Body" body=""
	I1206 10:48:18.473505  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:18.473841  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:18.473920  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:18.972621  399286 type.go:168] "Request Body" body=""
	I1206 10:48:18.972695  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:18.973065  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:19.472598  399286 type.go:168] "Request Body" body=""
	I1206 10:48:19.472705  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:19.473114  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:19.972515  399286 type.go:168] "Request Body" body=""
	I1206 10:48:19.972585  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:19.972856  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:20.472548  399286 type.go:168] "Request Body" body=""
	I1206 10:48:20.472625  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:20.472958  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:20.972576  399286 type.go:168] "Request Body" body=""
	I1206 10:48:20.972660  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:20.973023  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:20.973083  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:21.472615  399286 type.go:168] "Request Body" body=""
	I1206 10:48:21.472686  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:21.472963  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:21.972979  399286 type.go:168] "Request Body" body=""
	I1206 10:48:21.973061  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:21.973404  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:22.473200  399286 type.go:168] "Request Body" body=""
	I1206 10:48:22.473283  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:22.473635  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:22.973360  399286 type.go:168] "Request Body" body=""
	I1206 10:48:22.973441  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:22.973782  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:22.973843  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:23.472499  399286 type.go:168] "Request Body" body=""
	I1206 10:48:23.472580  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:23.472916  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:23.972556  399286 type.go:168] "Request Body" body=""
	I1206 10:48:23.972636  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:23.972975  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:24.472447  399286 type.go:168] "Request Body" body=""
	I1206 10:48:24.472514  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:24.472774  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:24.972471  399286 type.go:168] "Request Body" body=""
	I1206 10:48:24.972545  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:24.972884  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:25.472578  399286 type.go:168] "Request Body" body=""
	I1206 10:48:25.472667  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:25.473021  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:25.473076  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:25.972593  399286 type.go:168] "Request Body" body=""
	I1206 10:48:25.972669  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:25.972945  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:26.472488  399286 type.go:168] "Request Body" body=""
	I1206 10:48:26.472562  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:26.472906  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:26.972576  399286 type.go:168] "Request Body" body=""
	I1206 10:48:26.972660  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:26.973014  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:27.472549  399286 type.go:168] "Request Body" body=""
	I1206 10:48:27.472616  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:27.472894  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:27.972571  399286 type.go:168] "Request Body" body=""
	I1206 10:48:27.972663  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:27.973010  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:27.973066  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:28.472755  399286 type.go:168] "Request Body" body=""
	I1206 10:48:28.472833  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:28.473174  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:28.972507  399286 type.go:168] "Request Body" body=""
	I1206 10:48:28.972584  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:28.972913  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:29.472601  399286 type.go:168] "Request Body" body=""
	I1206 10:48:29.472680  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:29.473059  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:29.972567  399286 type.go:168] "Request Body" body=""
	I1206 10:48:29.972642  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:29.972943  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:30.472524  399286 type.go:168] "Request Body" body=""
	I1206 10:48:30.472616  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:30.472907  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:30.472969  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-196950 request/response cycle repeats every ~500 ms from 10:48:30.972 through 10:49:29.973, each response empty (status="" headers="" milliseconds=0), with node_ready.go:55 logging the identical "connection refused" warning every 2-3 s, last at 10:49:28.973 ...]
	I1206 10:49:30.472653  399286 type.go:168] "Request Body" body=""
	I1206 10:49:30.472729  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:30.473004  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:30.972581  399286 type.go:168] "Request Body" body=""
	I1206 10:49:30.972663  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:30.972997  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:31.472551  399286 type.go:168] "Request Body" body=""
	I1206 10:49:31.472633  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:31.472995  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:31.473053  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:31.972765  399286 type.go:168] "Request Body" body=""
	I1206 10:49:31.972832  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:31.973098  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:32.472547  399286 type.go:168] "Request Body" body=""
	I1206 10:49:32.472631  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:32.473016  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:32.972568  399286 type.go:168] "Request Body" body=""
	I1206 10:49:32.972645  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:32.972982  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:33.472517  399286 type.go:168] "Request Body" body=""
	I1206 10:49:33.472591  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:33.472911  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:33.972503  399286 type.go:168] "Request Body" body=""
	I1206 10:49:33.972576  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:33.972901  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:33.972964  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:34.472657  399286 type.go:168] "Request Body" body=""
	I1206 10:49:34.472734  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:34.473129  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:34.972818  399286 type.go:168] "Request Body" body=""
	I1206 10:49:34.972889  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:34.973175  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:35.472878  399286 type.go:168] "Request Body" body=""
	I1206 10:49:35.472955  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:35.473329  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:35.973094  399286 type.go:168] "Request Body" body=""
	I1206 10:49:35.973174  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:35.973494  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:35.973549  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:36.472432  399286 type.go:168] "Request Body" body=""
	I1206 10:49:36.472505  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:36.472781  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:36.972828  399286 type.go:168] "Request Body" body=""
	I1206 10:49:36.972905  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:36.973252  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:37.472563  399286 type.go:168] "Request Body" body=""
	I1206 10:49:37.472637  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:37.472994  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:37.972683  399286 type.go:168] "Request Body" body=""
	I1206 10:49:37.972763  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:37.973077  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:38.472554  399286 type.go:168] "Request Body" body=""
	I1206 10:49:38.472633  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:38.472969  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:38.473033  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:38.972729  399286 type.go:168] "Request Body" body=""
	I1206 10:49:38.972808  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:38.973142  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:39.472497  399286 type.go:168] "Request Body" body=""
	I1206 10:49:39.472571  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:39.472854  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:39.972565  399286 type.go:168] "Request Body" body=""
	I1206 10:49:39.972639  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:39.972983  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:40.472566  399286 type.go:168] "Request Body" body=""
	I1206 10:49:40.472647  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:40.472966  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:40.972679  399286 type.go:168] "Request Body" body=""
	I1206 10:49:40.972760  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:40.973065  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:40.973121  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:41.472568  399286 type.go:168] "Request Body" body=""
	I1206 10:49:41.472657  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:41.473025  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:41.972922  399286 type.go:168] "Request Body" body=""
	I1206 10:49:41.972998  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:41.973339  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:42.473053  399286 type.go:168] "Request Body" body=""
	I1206 10:49:42.473124  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:42.473408  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:42.973276  399286 type.go:168] "Request Body" body=""
	I1206 10:49:42.973355  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:42.973694  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:42.973752  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:43.473503  399286 type.go:168] "Request Body" body=""
	I1206 10:49:43.473574  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:43.473918  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:43.972599  399286 type.go:168] "Request Body" body=""
	I1206 10:49:43.972670  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:43.973000  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:44.472572  399286 type.go:168] "Request Body" body=""
	I1206 10:49:44.472649  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:44.472982  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:44.972582  399286 type.go:168] "Request Body" body=""
	I1206 10:49:44.972670  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:44.973019  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:45.472513  399286 type.go:168] "Request Body" body=""
	I1206 10:49:45.472583  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:45.472857  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:45.472901  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:45.972615  399286 type.go:168] "Request Body" body=""
	I1206 10:49:45.972695  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:45.973015  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:46.472495  399286 type.go:168] "Request Body" body=""
	I1206 10:49:46.472575  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:46.472931  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:46.972626  399286 type.go:168] "Request Body" body=""
	I1206 10:49:46.974621  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:46.974930  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:47.472626  399286 type.go:168] "Request Body" body=""
	I1206 10:49:47.472726  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:47.473070  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:47.473127  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:47.972571  399286 type.go:168] "Request Body" body=""
	I1206 10:49:47.972667  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:47.972989  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:48.472520  399286 type.go:168] "Request Body" body=""
	I1206 10:49:48.472595  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:48.472920  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:48.972585  399286 type.go:168] "Request Body" body=""
	I1206 10:49:48.972669  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:48.973030  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:49.472592  399286 type.go:168] "Request Body" body=""
	I1206 10:49:49.472671  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:49.473006  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:49.972683  399286 type.go:168] "Request Body" body=""
	I1206 10:49:49.972754  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:49.973062  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:49.973108  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:50.472567  399286 type.go:168] "Request Body" body=""
	I1206 10:49:50.472644  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:50.472979  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:50.972718  399286 type.go:168] "Request Body" body=""
	I1206 10:49:50.972795  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:50.973180  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:51.472470  399286 type.go:168] "Request Body" body=""
	I1206 10:49:51.472554  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:51.472819  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:51.972847  399286 type.go:168] "Request Body" body=""
	I1206 10:49:51.972931  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:51.973379  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:51.973433  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:52.473217  399286 type.go:168] "Request Body" body=""
	I1206 10:49:52.473304  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:52.473657  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:52.973426  399286 type.go:168] "Request Body" body=""
	I1206 10:49:52.973497  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:52.973879  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:53.472575  399286 type.go:168] "Request Body" body=""
	I1206 10:49:53.472654  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:53.473006  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:53.972732  399286 type.go:168] "Request Body" body=""
	I1206 10:49:53.972810  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:53.973150  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:54.472486  399286 type.go:168] "Request Body" body=""
	I1206 10:49:54.472557  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:54.472823  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:54.472866  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:54.972537  399286 type.go:168] "Request Body" body=""
	I1206 10:49:54.972613  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:54.972990  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:55.472700  399286 type.go:168] "Request Body" body=""
	I1206 10:49:55.472773  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:55.473120  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:55.972590  399286 type.go:168] "Request Body" body=""
	I1206 10:49:55.972662  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:55.972932  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:56.472845  399286 type.go:168] "Request Body" body=""
	I1206 10:49:56.472928  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:56.473307  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:56.473367  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:56.973002  399286 type.go:168] "Request Body" body=""
	I1206 10:49:56.973079  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:56.973419  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:57.473158  399286 type.go:168] "Request Body" body=""
	I1206 10:49:57.473232  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:57.473497  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:57.973292  399286 type.go:168] "Request Body" body=""
	I1206 10:49:57.973367  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:57.973704  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:58.473494  399286 type.go:168] "Request Body" body=""
	I1206 10:49:58.473568  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:58.473902  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:58.473963  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:58.972557  399286 type.go:168] "Request Body" body=""
	I1206 10:49:58.972629  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:58.972908  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:59.472542  399286 type.go:168] "Request Body" body=""
	I1206 10:49:59.472622  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:59.472961  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:59.972698  399286 type.go:168] "Request Body" body=""
	I1206 10:49:59.972792  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:59.973143  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:00.475191  399286 type.go:168] "Request Body" body=""
	I1206 10:50:00.475305  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:00.475740  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:00.475791  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:00.972513  399286 type.go:168] "Request Body" body=""
	I1206 10:50:00.972593  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:00.972946  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:01.472607  399286 type.go:168] "Request Body" body=""
	I1206 10:50:01.472698  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:01.473000  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:01.972891  399286 type.go:168] "Request Body" body=""
	I1206 10:50:01.972964  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:01.973246  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:02.473133  399286 type.go:168] "Request Body" body=""
	I1206 10:50:02.473209  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:02.473541  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:02.973359  399286 type.go:168] "Request Body" body=""
	I1206 10:50:02.973436  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:02.973744  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:02.973794  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:03.472437  399286 type.go:168] "Request Body" body=""
	I1206 10:50:03.472515  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:03.472786  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:03.972517  399286 type.go:168] "Request Body" body=""
	I1206 10:50:03.972617  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:03.972974  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:04.472690  399286 type.go:168] "Request Body" body=""
	I1206 10:50:04.472770  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:04.473092  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:04.972613  399286 type.go:168] "Request Body" body=""
	I1206 10:50:04.972688  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:04.973025  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:05.472589  399286 type.go:168] "Request Body" body=""
	I1206 10:50:05.472662  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:05.472985  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:05.473041  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:05.972699  399286 type.go:168] "Request Body" body=""
	I1206 10:50:05.972774  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:05.973134  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:06.472872  399286 type.go:168] "Request Body" body=""
	I1206 10:50:06.472950  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:06.473229  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:06.973317  399286 type.go:168] "Request Body" body=""
	I1206 10:50:06.973399  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:06.973730  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:07.472451  399286 type.go:168] "Request Body" body=""
	I1206 10:50:07.472529  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:07.472886  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:07.972570  399286 type.go:168] "Request Body" body=""
	I1206 10:50:07.972651  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:07.972971  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:07.973027  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:08.472555  399286 type.go:168] "Request Body" body=""
	I1206 10:50:08.472629  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:08.472968  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:08.972685  399286 type.go:168] "Request Body" body=""
	I1206 10:50:08.972768  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:08.973126  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:09.472810  399286 type.go:168] "Request Body" body=""
	I1206 10:50:09.472880  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:09.473152  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:09.972581  399286 type.go:168] "Request Body" body=""
	I1206 10:50:09.972655  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:09.973005  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:09.973065  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:10.472745  399286 type.go:168] "Request Body" body=""
	I1206 10:50:10.472826  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:10.473165  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:10.972520  399286 type.go:168] "Request Body" body=""
	I1206 10:50:10.972626  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:10.972896  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:11.472575  399286 type.go:168] "Request Body" body=""
	I1206 10:50:11.472653  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:11.472988  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:11.972973  399286 type.go:168] "Request Body" body=""
	I1206 10:50:11.973057  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:11.973408  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:11.973457  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:12.473192  399286 type.go:168] "Request Body" body=""
	I1206 10:50:12.473263  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:12.473529  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:12.973277  399286 type.go:168] "Request Body" body=""
	I1206 10:50:12.973358  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:12.973714  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:13.473443  399286 type.go:168] "Request Body" body=""
	I1206 10:50:13.473531  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:13.473882  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:13.972569  399286 type.go:168] "Request Body" body=""
	I1206 10:50:13.972666  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:13.972977  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:14.472568  399286 type.go:168] "Request Body" body=""
	I1206 10:50:14.472647  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:14.472975  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:14.473038  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:14.972576  399286 type.go:168] "Request Body" body=""
	I1206 10:50:14.972652  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:14.972994  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:15.472548  399286 type.go:168] "Request Body" body=""
	I1206 10:50:15.472616  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:15.472889  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:15.972575  399286 type.go:168] "Request Body" body=""
	I1206 10:50:15.972652  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:15.972980  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:16.472885  399286 type.go:168] "Request Body" body=""
	I1206 10:50:16.472971  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:16.473326  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:16.473380  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:16.973027  399286 type.go:168] "Request Body" body=""
	I1206 10:50:16.973096  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:16.973374  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:17.473136  399286 type.go:168] "Request Body" body=""
	I1206 10:50:17.473212  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:17.473562  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:17.973242  399286 type.go:168] "Request Body" body=""
	I1206 10:50:17.973317  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:17.973682  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:18.473432  399286 type.go:168] "Request Body" body=""
	I1206 10:50:18.473500  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:18.473770  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:18.473813  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:18.972497  399286 type.go:168] "Request Body" body=""
	I1206 10:50:18.972578  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:18.972916  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:19.472629  399286 type.go:168] "Request Body" body=""
	I1206 10:50:19.472708  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:19.473031  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:19.972542  399286 type.go:168] "Request Body" body=""
	I1206 10:50:19.972615  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:19.972928  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:20.472572  399286 type.go:168] "Request Body" body=""
	I1206 10:50:20.472675  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:20.473027  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:20.972608  399286 type.go:168] "Request Body" body=""
	I1206 10:50:20.972691  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:20.973039  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:20.973093  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-196950 poll repeats every ~500ms, each attempt failing with "dial tcp 192.168.49.2:8441: connect: connection refused" (a node_ready warning is logged roughly every 2s), from 10:50:21 through 10:51:05 ...]
	I1206 10:51:06.472600  399286 type.go:168] "Request Body" body=""
	I1206 10:51:06.472689  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:51:06.473022  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:51:06.972909  399286 type.go:168] "Request Body" body=""
	I1206 10:51:06.972998  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:51:06.973336  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:51:07.473123  399286 type.go:168] "Request Body" body=""
	I1206 10:51:07.473186  399286 node_ready.go:38] duration metric: took 6m0.000853216s for node "functional-196950" to be "Ready" ...
	I1206 10:51:07.476374  399286 out.go:203] 
	W1206 10:51:07.479349  399286 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1206 10:51:07.479391  399286 out.go:285] * 
	W1206 10:51:07.481554  399286 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:51:07.484691  399286 out.go:203] 
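
	The six minutes of output above are a single retry loop: minikube polls GET /api/v1/nodes/functional-196950 every ~500ms and never sees the Ready condition, because nothing is listening on 192.168.49.2:8441. As an illustration only (this is not minikube's actual node_ready.go, and the default kubeconfig path is an assumption), the same wait pattern looks roughly like this with client-go:

	// Illustrative sketch only, not minikube's implementation: poll a node's
	// Ready condition until an outer deadline, swallowing transport errors.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
		tick := time.NewTicker(500 * time.Millisecond) // the log above shows ~500ms polls
		defer tick.Stop()
		for {
			select {
			case <-ctx.Done():
				return fmt.Errorf("waiting for node %q to be Ready: %w", name, ctx.Err())
			case <-tick.C:
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					continue // e.g. "connection refused" while the apiserver is down; retry
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute) // matches the 6m0s budget above
		defer cancel()
		if err := waitNodeReady(ctx, cs, "functional-196950"); err != nil {
			fmt.Println("X", err) // mirrors the GUEST_START failure path
		}
	}

	The design point visible in the log is that per-request transport errors are retried silently until the outer deadline fires, which is why the loop runs the full 6m0s before GUEST_START fails.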
	
	
	==> CRI-O <==
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.12702865Z" level=info msg="Using the internal default seccomp profile"
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.127091436Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.127143753Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.127203191Z" level=info msg="RDT not available in the host system"
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.127272804Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.128176526Z" level=info msg="Conmon does support the --sync option"
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.128207641Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.128227021Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.128935041Z" level=info msg="Conmon does support the --sync option"
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.128966852Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.129131408Z" level=info msg="Updated default CNI network name to "
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.129748276Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oc
i/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_
memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_d
ir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [c
rio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.130139541Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.130206373Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.184525954Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.184710276Z" level=info msg="Starting seccomp notifier watcher"
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.184779404Z" level=info msg="Create NRI interface"
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.184893957Z" level=info msg="built-in NRI default validator is disabled"
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.18491479Z" level=info msg="runtime interface created"
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.18492819Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.184937543Z" level=info msg="runtime interface starting up..."
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.184944682Z" level=info msg="starting plugins..."
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.184959082Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.185027768Z" level=info msg="No systemd watchdog enabled"
	Dec 06 10:45:05 functional-196950 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:51:09.671534    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:51:09.672113    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:51:09.674434    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:51:09.675709    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:51:09.676444    8593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 6 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014752] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503231] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.065820] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.901896] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.944603] kauditd_printk_skb: 39 callbacks suppressed
	[Dec 6 09:04] hrtimer: interrupt took 32230230 ns
	[Dec 6 10:25] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 6 10:26] overlayfs: idmapped layers are currently not supported
	[  +0.066821] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 6 10:32] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:33] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 10:51:09 up  2:33,  0 user,  load average: 0.31, 0.30, 0.86
	Linux functional-196950 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 10:51:07 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:51:07 functional-196950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 810.
	Dec 06 10:51:07 functional-196950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:07 functional-196950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:08 functional-196950 kubelet[8483]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:51:08 functional-196950 kubelet[8483]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:51:08 functional-196950 kubelet[8483]: E1206 10:51:08.052157    8483 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:51:08 functional-196950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:51:08 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:51:08 functional-196950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 811.
	Dec 06 10:51:08 functional-196950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:08 functional-196950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:08 functional-196950 kubelet[8503]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:51:08 functional-196950 kubelet[8503]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:51:08 functional-196950 kubelet[8503]: E1206 10:51:08.790083    8503 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:51:08 functional-196950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:51:08 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:51:09 functional-196950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 812.
	Dec 06 10:51:09 functional-196950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:09 functional-196950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:09 functional-196950 kubelet[8555]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:51:09 functional-196950 kubelet[8555]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:51:09 functional-196950 kubelet[8555]: E1206 10:51:09.535431    8555 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:51:09 functional-196950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:51:09 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-196950 -n functional-196950
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-196950 -n functional-196950: exit status 2 (353.852377ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-196950" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (369.07s)
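Note on the failure mode: the kubelet restart loop above (restart counters 810 through 812) stems from a single validation error, repeated on every start: the node is running cgroup v1, which this kubelet build refuses to run on. A minimal host-side check, assuming shell access to the node (for example via minikube ssh); these commands are illustrative and not part of the test run:

	# "cgroup2fs" means cgroup v2 (unified hierarchy); "tmpfs" means cgroup v1.
	stat -fc %T /sys/fs/cgroup/
	# On a cgroup v1 host, booting with the kernel parameter
	#   systemd.unified_cgroup_hierarchy=1
	# switches the host to cgroup v2.
	cat /proc/cmdline

Until the host is switched to cgroup v2, every kubelet start fails the same validation, so the apiserver static pod never comes up and the soft start times out.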

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.49s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-196950 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-196950 get po -A: exit status 1 (59.779685ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-196950 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-196950 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-196950 get po -A"
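The assertion failures above all reduce to the same root cause: nothing is listening on the apiserver port. A hedged way to reproduce the check by hand, assuming the same kubeconfig context (commands are illustrative, not from the test run):

	# Print the server URL kubectl resolves for this context.
	kubectl config view --minify --context functional-196950 -o jsonpath='{.clusters[0].cluster.server}'
	# From inside the node, confirm whether anything is bound to 8441.
	minikube -p functional-196950 ssh -- sudo ss -tlnp | grep 8441

With the kubelet crash-looping on the cgroup v1 validation (see the SoftStart logs above), the static apiserver pod never starts, so both 192.168.49.2:8441 and the forwarded localhost port refuse connections.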
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-196950
helpers_test.go:243: (dbg) docker inspect functional-196950:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1",
	        "Created": "2025-12-06T10:36:45.201779678Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 393848,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T10:36:45.318229053Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/hostname",
	        "HostsPath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/hosts",
	        "LogPath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1-json.log",
	        "Name": "/functional-196950",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-196950:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-196950",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1",
	                "LowerDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1-init/diff:/var/lib/docker/overlay2/5011226d55616c9977b14c1fe617d1302fe59373df05ce8ec6e21b79143a1c57/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-196950",
	                "Source": "/var/lib/docker/volumes/functional-196950/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-196950",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-196950",
	                "name.minikube.sigs.k8s.io": "functional-196950",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9b8f961d55d7529aed7b841f2ac9f818c22ff12b8ad73f2d6bcee22656d9749a",
	            "SandboxKey": "/var/run/docker/netns/9b8f961d55d7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33158"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-196950": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:c1:40:2a:93:47",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a566bfdfd33a868cf61e5b18b36cbd55e9868f24cbb091e055ae606aeb8c6f03",
	                    "EndpointID": "452fe32bde0c42c4c35d700488ae93aeecc6c6a971ac6f1a8a492dbc4b328ed9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-196950",
	                        "d150aac7296d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
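One detail worth reading out of the inspect output: port 8441/tcp is published only on the host loopback (127.0.0.1:33161), while kubectl above targets the container IP 192.168.49.2:8441 directly. A quick way to list the published binding, assuming the docker CLI on the same host (illustrative, not part of the test run):

	docker port functional-196950 8441
	# Per the NetworkSettings above, this prints: 127.0.0.1:33161

Both paths forward to the same apiserver socket inside the container; both refuse connections here because the apiserver pod never came up.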
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-196950 -n functional-196950
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-196950 -n functional-196950: exit status 2 (322.440744ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-196950 logs -n 25: (1.066180806s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-205266 image ls                                                                                                                                │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ ssh            │ functional-205266 ssh sudo cat /usr/share/ca-certificates/364855.pem                                                                                      │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image          │ functional-205266 image save kicbase/echo-server:functional-205266 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ ssh            │ functional-205266 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image          │ functional-205266 image rm kicbase/echo-server:functional-205266 --alsologtostderr                                                                        │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ ssh            │ functional-205266 ssh sudo cat /etc/ssl/certs/3648552.pem                                                                                                 │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image          │ functional-205266 image ls                                                                                                                                │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ ssh            │ functional-205266 ssh sudo cat /usr/share/ca-certificates/3648552.pem                                                                                     │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image          │ functional-205266 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ ssh            │ functional-205266 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image          │ functional-205266 image ls                                                                                                                                │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image          │ functional-205266 image save --daemon kicbase/echo-server:functional-205266 --alsologtostderr                                                             │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ update-context │ functional-205266 update-context --alsologtostderr -v=2                                                                                                   │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ update-context │ functional-205266 update-context --alsologtostderr -v=2                                                                                                   │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ update-context │ functional-205266 update-context --alsologtostderr -v=2                                                                                                   │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image          │ functional-205266 image ls --format short --alsologtostderr                                                                                               │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image          │ functional-205266 image ls --format yaml --alsologtostderr                                                                                                │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ ssh            │ functional-205266 ssh pgrep buildkitd                                                                                                                     │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │                     │
	│ image          │ functional-205266 image ls --format json --alsologtostderr                                                                                                │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image          │ functional-205266 image build -t localhost/my-image:functional-205266 testdata/build --alsologtostderr                                                    │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image          │ functional-205266 image ls --format table --alsologtostderr                                                                                               │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image          │ functional-205266 image ls                                                                                                                                │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ delete         │ -p functional-205266                                                                                                                                      │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ start          │ -p functional-196950 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0         │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │                     │
	│ start          │ -p functional-196950 --alsologtostderr -v=8                                                                                                               │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:45 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:45:01
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:45:01.787203  399286 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:45:01.787433  399286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:45:01.787467  399286 out.go:374] Setting ErrFile to fd 2...
	I1206 10:45:01.787489  399286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:45:01.787778  399286 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 10:45:01.788186  399286 out.go:368] Setting JSON to false
	I1206 10:45:01.789151  399286 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8853,"bootTime":1765009049,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 10:45:01.789259  399286 start.go:143] virtualization:  
	I1206 10:45:01.792729  399286 out.go:179] * [functional-196950] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:45:01.796494  399286 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 10:45:01.796574  399286 notify.go:221] Checking for updates...
	I1206 10:45:01.802323  399286 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:45:01.805290  399286 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 10:45:01.808768  399286 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-362985/.minikube
	I1206 10:45:01.811515  399286 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:45:01.814379  399286 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:45:01.817672  399286 config.go:182] Loaded profile config "functional-196950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:45:01.817798  399286 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:45:01.851887  399286 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:45:01.852009  399286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:45:01.921321  399286 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:45:01.909571102 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:45:01.921426  399286 docker.go:319] overlay module found
	I1206 10:45:01.926314  399286 out.go:179] * Using the docker driver based on existing profile
	I1206 10:45:01.929149  399286 start.go:309] selected driver: docker
	I1206 10:45:01.929174  399286 start.go:927] validating driver "docker" against &{Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:45:01.929299  399286 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:45:01.929402  399286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:45:02.005684  399286 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:45:01.991905909 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:45:02.006178  399286 cni.go:84] Creating CNI manager for ""
	I1206 10:45:02.006252  399286 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:45:02.006308  399286 start.go:353] cluster config:
	{Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:45:02.012455  399286 out.go:179] * Starting "functional-196950" primary control-plane node in "functional-196950" cluster
	I1206 10:45:02.015293  399286 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 10:45:02.018502  399286 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:45:02.021547  399286 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 10:45:02.021609  399286 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1206 10:45:02.021620  399286 cache.go:65] Caching tarball of preloaded images
	I1206 10:45:02.021746  399286 preload.go:238] Found /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1206 10:45:02.021762  399286 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1206 10:45:02.021883  399286 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/config.json ...
	I1206 10:45:02.022120  399286 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:45:02.058171  399286 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 10:45:02.058196  399286 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 10:45:02.058216  399286 cache.go:243] Successfully downloaded all kic artifacts
	I1206 10:45:02.058248  399286 start.go:360] acquireMachinesLock for functional-196950: {Name:mkd2471f275d1d2a438cb4ce89f1d1521a0fb340 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:45:02.058324  399286 start.go:364] duration metric: took 51.241µs to acquireMachinesLock for "functional-196950"
	I1206 10:45:02.058347  399286 start.go:96] Skipping create...Using existing machine configuration
	I1206 10:45:02.058352  399286 fix.go:54] fixHost starting: 
	I1206 10:45:02.058623  399286 cli_runner.go:164] Run: docker container inspect functional-196950 --format={{.State.Status}}
	I1206 10:45:02.075952  399286 fix.go:112] recreateIfNeeded on functional-196950: state=Running err=<nil>
	W1206 10:45:02.075984  399286 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 10:45:02.079219  399286 out.go:252] * Updating the running docker "functional-196950" container ...
	I1206 10:45:02.079261  399286 machine.go:94] provisionDockerMachine start ...
	I1206 10:45:02.079396  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:02.097606  399286 main.go:143] libmachine: Using SSH client type: native
	I1206 10:45:02.097945  399286 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1206 10:45:02.097963  399286 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:45:02.251117  399286 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-196950
	
	I1206 10:45:02.251145  399286 ubuntu.go:182] provisioning hostname "functional-196950"
	I1206 10:45:02.251226  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:02.270896  399286 main.go:143] libmachine: Using SSH client type: native
	I1206 10:45:02.271293  399286 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1206 10:45:02.271357  399286 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-196950 && echo "functional-196950" | sudo tee /etc/hostname
	I1206 10:45:02.434988  399286 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-196950
	
	I1206 10:45:02.435098  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:02.453713  399286 main.go:143] libmachine: Using SSH client type: native
	I1206 10:45:02.454033  399286 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1206 10:45:02.454055  399286 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-196950' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-196950/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-196950' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:45:02.607868  399286 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 10:45:02.607903  399286 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-362985/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-362985/.minikube}
	I1206 10:45:02.607940  399286 ubuntu.go:190] setting up certificates
	I1206 10:45:02.607949  399286 provision.go:84] configureAuth start
	I1206 10:45:02.608015  399286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-196950
	I1206 10:45:02.626134  399286 provision.go:143] copyHostCerts
	I1206 10:45:02.626186  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem
	I1206 10:45:02.626227  399286 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem, removing ...
	I1206 10:45:02.626247  399286 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem
	I1206 10:45:02.626323  399286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem (1082 bytes)
	I1206 10:45:02.626456  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem
	I1206 10:45:02.626477  399286 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem, removing ...
	I1206 10:45:02.626487  399286 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem
	I1206 10:45:02.626523  399286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem (1123 bytes)
	I1206 10:45:02.626584  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem
	I1206 10:45:02.626607  399286 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem, removing ...
	I1206 10:45:02.626611  399286 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem
	I1206 10:45:02.626634  399286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem (1679 bytes)
	I1206 10:45:02.626683  399286 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem org=jenkins.functional-196950 san=[127.0.0.1 192.168.49.2 functional-196950 localhost minikube]
	I1206 10:45:02.961448  399286 provision.go:177] copyRemoteCerts
	I1206 10:45:02.961531  399286 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:45:02.961575  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:02.978755  399286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:45:03.095893  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1206 10:45:03.095982  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 10:45:03.114611  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1206 10:45:03.114706  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 10:45:03.135133  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1206 10:45:03.135195  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 10:45:03.153562  399286 provision.go:87] duration metric: took 545.588133ms to configureAuth
	I1206 10:45:03.153601  399286 ubuntu.go:206] setting minikube options for container-runtime
	I1206 10:45:03.153843  399286 config.go:182] Loaded profile config "functional-196950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:45:03.153992  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:03.171946  399286 main.go:143] libmachine: Using SSH client type: native
	I1206 10:45:03.172256  399286 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1206 10:45:03.172279  399286 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 10:45:03.524489  399286 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 10:45:03.524512  399286 machine.go:97] duration metric: took 1.445242076s to provisionDockerMachine
	I1206 10:45:03.524523  399286 start.go:293] postStartSetup for "functional-196950" (driver="docker")
	I1206 10:45:03.524536  399286 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:45:03.524603  399286 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:45:03.524644  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:03.555449  399286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:45:03.668233  399286 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:45:03.672046  399286 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1206 10:45:03.672068  399286 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1206 10:45:03.672073  399286 command_runner.go:130] > VERSION_ID="12"
	I1206 10:45:03.672078  399286 command_runner.go:130] > VERSION="12 (bookworm)"
	I1206 10:45:03.672084  399286 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1206 10:45:03.672087  399286 command_runner.go:130] > ID=debian
	I1206 10:45:03.672092  399286 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1206 10:45:03.672114  399286 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1206 10:45:03.672130  399286 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1206 10:45:03.672206  399286 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 10:45:03.672228  399286 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 10:45:03.672240  399286 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-362985/.minikube/addons for local assets ...
	I1206 10:45:03.672300  399286 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-362985/.minikube/files for local assets ...
	I1206 10:45:03.672390  399286 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem -> 3648552.pem in /etc/ssl/certs
	I1206 10:45:03.672402  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem -> /etc/ssl/certs/3648552.pem
	I1206 10:45:03.672481  399286 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/test/nested/copy/364855/hosts -> hosts in /etc/test/nested/copy/364855
	I1206 10:45:03.672489  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/test/nested/copy/364855/hosts -> /etc/test/nested/copy/364855/hosts
	I1206 10:45:03.672536  399286 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/364855
	I1206 10:45:03.681376  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem --> /etc/ssl/certs/3648552.pem (1708 bytes)
	I1206 10:45:03.700845  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/test/nested/copy/364855/hosts --> /etc/test/nested/copy/364855/hosts (40 bytes)
	I1206 10:45:03.720695  399286 start.go:296] duration metric: took 196.153156ms for postStartSetup
	I1206 10:45:03.720782  399286 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:45:03.720851  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:03.739871  399286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:45:03.844136  399286 command_runner.go:130] > 11%
	I1206 10:45:03.844709  399286 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 10:45:03.849387  399286 command_runner.go:130] > 174G
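The two df probes above are how the postStartSetup step samples disk pressure on /var: percent used from `df -h`, then free gigabytes from `df -BG`. A minimal standalone sketch of the same probe in Go; the helper name checkDiskUsage is illustrative, not minikube's API, and it runs the shell locally rather than over SSH:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// checkDiskUsage mirrors the two probes in the log: percent used and
	// gigabytes free on the filesystem holding path. Hypothetical helper;
	// minikube runs the same shell pipelines over SSH instead.
	func checkDiskUsage(path string) (percentUsed, gigsFree string, err error) {
		// df -h <path> | awk 'NR==2{print $5}'  -> e.g. "11%"
		out, err := exec.Command("sh", "-c",
			fmt.Sprintf("df -h %s | awk 'NR==2{print $5}'", path)).Output()
		if err != nil {
			return "", "", err
		}
		percentUsed = strings.TrimSpace(string(out))

		// df -BG <path> | awk 'NR==2{print $4}'  -> e.g. "174G"
		out, err = exec.Command("sh", "-c",
			fmt.Sprintf("df -BG %s | awk 'NR==2{print $4}'", path)).Output()
		if err != nil {
			return "", "", err
		}
		gigsFree = strings.TrimSpace(string(out))
		return percentUsed, gigsFree, nil
	}

	func main() {
		used, free, err := checkDiskUsage("/var")
		if err != nil {
			panic(err)
		}
		fmt.Printf("used: %s, free: %s\n", used, free)
	}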
	I1206 10:45:03.849978  399286 fix.go:56] duration metric: took 1.791620292s for fixHost
	I1206 10:45:03.850000  399286 start.go:83] releasing machines lock for "functional-196950", held for 1.791664797s
	I1206 10:45:03.850077  399286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-196950
	I1206 10:45:03.867785  399286 ssh_runner.go:195] Run: cat /version.json
	I1206 10:45:03.867838  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:03.868113  399286 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:45:03.868167  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:03.886546  399286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:45:03.911694  399286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:45:03.991370  399286 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764843390-22032", "minikube_version": "v1.37.0", "commit": "d7bfd7d6d80c3eeb1d6cf1c5f081f8642bc1997e"}
	I1206 10:45:03.991537  399286 ssh_runner.go:195] Run: systemctl --version
	I1206 10:45:04.088215  399286 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1206 10:45:04.091250  399286 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1206 10:45:04.091291  399286 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1206 10:45:04.091431  399286 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 10:45:04.130964  399286 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1206 10:45:04.136249  399286 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1206 10:45:04.136293  399286 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:45:04.136352  399286 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:45:04.145113  399286 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
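The find/mv one-liner above parks any bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so only minikube's chosen CNI is active; here it finds nothing to do. A sketch of the same idea in Go, under the assumption that a plain substring match on file names is enough (function name is ours):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// disableBridgeCNI parks bridge/podman CNI configs out of the way by
	// appending .mk_disabled, like the find/mv pipeline in the log.
	func disableBridgeCNI(dir string) error {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return err
		}
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					return err
				}
				fmt.Println("disabled", src)
			}
		}
		return nil
	}

	func main() {
		if err := disableBridgeCNI("/etc/cni/net.d"); err != nil {
			panic(err)
		}
	}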
	I1206 10:45:04.145182  399286 start.go:496] detecting cgroup driver to use...
	I1206 10:45:04.145222  399286 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 10:45:04.145282  399286 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 10:45:04.161420  399286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 10:45:04.175205  399286 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:45:04.175315  399286 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:45:04.191496  399286 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:45:04.205243  399286 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:45:04.349911  399286 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:45:04.470887  399286 docker.go:234] disabling docker service ...
	I1206 10:45:04.471006  399286 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:45:04.486933  399286 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:45:04.500707  399286 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:45:04.632842  399286 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:45:04.756279  399286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 10:45:04.770461  399286 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:45:04.785365  399286 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
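The mkdir/printf/tee command above materializes /etc/crictl.yaml so that crictl talks to CRI-O's socket, and the tee output echoes the file back for confirmation. Locally (run as root) the same effect is a two-line file write; a sketch, not minikube's code path:

	package main

	import (
		"os"
	)

	// Point crictl at CRI-O's socket, as the tee one-liner in the log does.
	func main() {
		const crictlYAML = "runtime-endpoint: unix:///var/run/crio/crio.sock\n"
		if err := os.MkdirAll("/etc", 0755); err != nil {
			panic(err)
		}
		if err := os.WriteFile("/etc/crictl.yaml", []byte(crictlYAML), 0644); err != nil {
			panic(err)
		}
	}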
	I1206 10:45:04.786482  399286 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 10:45:04.786596  399286 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:45:04.796852  399286 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 10:45:04.796980  399286 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:45:04.806654  399286 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:45:04.816002  399286 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:45:04.825576  399286 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:45:04.834547  399286 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:45:04.844889  399286 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:45:04.854032  399286 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
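The run of sed commands above rewrites CRI-O's drop-in config in place: pin the pause image, force the cgroupfs cgroup manager, reset conmon_cgroup, and open unprivileged ports via default_sysctls. The pattern is "regex-replace a whole config line"; a Go equivalent of the first of those edits, assuming the drop-in file exists (setPauseImage is an illustrative name):

	package main

	import (
		"os"
		"regexp"
	)

	// setPauseImage is the Go shape of the sed line in the log:
	// replace any existing pause_image assignment with the given image.
	func setPauseImage(confPath, image string) error {
		data, err := os.ReadFile(confPath)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		updated := re.ReplaceAll(data, []byte(`pause_image = "`+image+`"`))
		return os.WriteFile(confPath, updated, 0644)
	}

	func main() {
		if err := setPauseImage("/etc/crio/crio.conf.d/02-crio.conf",
			"registry.k8s.io/pause:3.10.1"); err != nil {
			panic(err)
		}
	}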
	I1206 10:45:04.863103  399286 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:45:04.870297  399286 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1206 10:45:04.871475  399286 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 10:45:04.879247  399286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:45:04.992959  399286 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 10:45:05.192927  399286 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 10:45:05.193085  399286 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 10:45:05.197937  399286 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1206 10:45:05.197964  399286 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1206 10:45:05.197971  399286 command_runner.go:130] > Device: 0,72	Inode: 1640        Links: 1
	I1206 10:45:05.197987  399286 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 10:45:05.198031  399286 command_runner.go:130] > Access: 2025-12-06 10:45:05.125759427 +0000
	I1206 10:45:05.198049  399286 command_runner.go:130] > Modify: 2025-12-06 10:45:05.125759427 +0000
	I1206 10:45:05.198060  399286 command_runner.go:130] > Change: 2025-12-06 10:45:05.125759427 +0000
	I1206 10:45:05.198063  399286 command_runner.go:130] >  Birth: -
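The "Will wait 60s for socket path" step that the stat output above satisfies is a poll-until-the-socket-exists loop. A minimal sketch of that shape, assuming the same 60s budget (waitForSocket and the 500ms poll interval are ours):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until the unix socket appears, the same shape
	// as the 60s wait for /var/run/crio/crio.sock in the log.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			panic(err)
		}
		fmt.Println("crio.sock is up")
	}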
	I1206 10:45:05.198081  399286 start.go:564] Will wait 60s for crictl version
	I1206 10:45:05.198158  399286 ssh_runner.go:195] Run: which crictl
	I1206 10:45:05.202333  399286 command_runner.go:130] > /usr/local/bin/crictl
	I1206 10:45:05.202451  399286 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 10:45:05.227773  399286 command_runner.go:130] > Version:  0.1.0
	I1206 10:45:05.227855  399286 command_runner.go:130] > RuntimeName:  cri-o
	I1206 10:45:05.227876  399286 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1206 10:45:05.227895  399286 command_runner.go:130] > RuntimeApiVersion:  v1
	I1206 10:45:05.230308  399286 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 10:45:05.230460  399286 ssh_runner.go:195] Run: crio --version
	I1206 10:45:05.261871  399286 command_runner.go:130] > crio version 1.34.3
	I1206 10:45:05.261971  399286 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1206 10:45:05.261992  399286 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1206 10:45:05.262014  399286 command_runner.go:130] >    GitTreeState:   dirty
	I1206 10:45:05.262045  399286 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1206 10:45:05.262062  399286 command_runner.go:130] >    GoVersion:      go1.24.6
	I1206 10:45:05.262083  399286 command_runner.go:130] >    Compiler:       gc
	I1206 10:45:05.262102  399286 command_runner.go:130] >    Platform:       linux/arm64
	I1206 10:45:05.262141  399286 command_runner.go:130] >    Linkmode:       static
	I1206 10:45:05.262176  399286 command_runner.go:130] >    BuildTags:
	I1206 10:45:05.262192  399286 command_runner.go:130] >      static
	I1206 10:45:05.262229  399286 command_runner.go:130] >      netgo
	I1206 10:45:05.262248  399286 command_runner.go:130] >      osusergo
	I1206 10:45:05.262264  399286 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1206 10:45:05.262283  399286 command_runner.go:130] >      seccomp
	I1206 10:45:05.262317  399286 command_runner.go:130] >      apparmor
	I1206 10:45:05.262335  399286 command_runner.go:130] >      selinux
	I1206 10:45:05.262352  399286 command_runner.go:130] >    LDFlags:          unknown
	I1206 10:45:05.262371  399286 command_runner.go:130] >    SeccompEnabled:   true
	I1206 10:45:05.262402  399286 command_runner.go:130] >    AppArmorEnabled:  false
	I1206 10:45:05.263735  399286 ssh_runner.go:195] Run: crio --version
	I1206 10:45:05.292275  399286 command_runner.go:130] > crio version 1.34.3
	I1206 10:45:05.292350  399286 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1206 10:45:05.292370  399286 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1206 10:45:05.292389  399286 command_runner.go:130] >    GitTreeState:   dirty
	I1206 10:45:05.292419  399286 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1206 10:45:05.292445  399286 command_runner.go:130] >    GoVersion:      go1.24.6
	I1206 10:45:05.292464  399286 command_runner.go:130] >    Compiler:       gc
	I1206 10:45:05.292484  399286 command_runner.go:130] >    Platform:       linux/arm64
	I1206 10:45:05.292510  399286 command_runner.go:130] >    Linkmode:       static
	I1206 10:45:05.292529  399286 command_runner.go:130] >    BuildTags:
	I1206 10:45:05.292548  399286 command_runner.go:130] >      static
	I1206 10:45:05.292577  399286 command_runner.go:130] >      netgo
	I1206 10:45:05.292594  399286 command_runner.go:130] >      osusergo
	I1206 10:45:05.292622  399286 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1206 10:45:05.292652  399286 command_runner.go:130] >      seccomp
	I1206 10:45:05.292669  399286 command_runner.go:130] >      apparmor
	I1206 10:45:05.292692  399286 command_runner.go:130] >      selinux
	I1206 10:45:05.292731  399286 command_runner.go:130] >    LDFlags:          unknown
	I1206 10:45:05.292749  399286 command_runner.go:130] >    SeccompEnabled:   true
	I1206 10:45:05.292767  399286 command_runner.go:130] >    AppArmorEnabled:  false
	I1206 10:45:05.300434  399286 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1206 10:45:05.303425  399286 cli_runner.go:164] Run: docker network inspect functional-196950 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:45:05.320718  399286 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 10:45:05.324954  399286 command_runner.go:130] > 192.168.49.1	host.minikube.internal
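The grep above is an idempotency check: the host.minikube.internal mapping is only appended to /etc/hosts when it is not already there, and here it already is. The same check-then-append pattern in Go, with illustrative values (minikube performs this over SSH inside the node):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry appends "ip<TAB>host" to /etc/hosts only if that
	// exact entry is missing, mirroring the grep guard in the log.
	func ensureHostsEntry(ip, host string) error {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			return err
		}
		entry := ip + "\t" + host
		if strings.Contains(string(data), entry) {
			return nil // already present, nothing to do
		}
		f, err := os.OpenFile("/etc/hosts", os.O_APPEND|os.O_WRONLY, 0644)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = fmt.Fprintln(f, entry)
		return err
	}

	func main() {
		if err := ensureHostsEntry("192.168.49.1", "host.minikube.internal"); err != nil {
			panic(err)
		}
	}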
	I1206 10:45:05.325142  399286 kubeadm.go:884] updating cluster {Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:45:05.325270  399286 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 10:45:05.325346  399286 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:45:05.356177  399286 command_runner.go:130] > {
	I1206 10:45:05.356195  399286 command_runner.go:130] >   "images":  [
	I1206 10:45:05.356199  399286 command_runner.go:130] >     {
	I1206 10:45:05.356208  399286 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1206 10:45:05.356213  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.356218  399286 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1206 10:45:05.356222  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356226  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.356235  399286 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1206 10:45:05.356243  399286 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1206 10:45:05.356246  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356251  399286 command_runner.go:130] >       "size":  "111333938",
	I1206 10:45:05.356254  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.356259  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.356262  399286 command_runner.go:130] >     },
	I1206 10:45:05.356265  399286 command_runner.go:130] >     {
	I1206 10:45:05.356272  399286 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1206 10:45:05.356285  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.356291  399286 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1206 10:45:05.356294  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356298  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.356307  399286 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1206 10:45:05.356315  399286 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1206 10:45:05.356318  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356322  399286 command_runner.go:130] >       "size":  "29037500",
	I1206 10:45:05.356326  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.356334  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.356337  399286 command_runner.go:130] >     },
	I1206 10:45:05.356340  399286 command_runner.go:130] >     {
	I1206 10:45:05.356346  399286 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1206 10:45:05.356350  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.356355  399286 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1206 10:45:05.356358  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356362  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.356369  399286 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1206 10:45:05.356377  399286 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1206 10:45:05.356380  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356385  399286 command_runner.go:130] >       "size":  "74491780",
	I1206 10:45:05.356389  399286 command_runner.go:130] >       "username":  "nonroot",
	I1206 10:45:05.356393  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.356396  399286 command_runner.go:130] >     },
	I1206 10:45:05.356399  399286 command_runner.go:130] >     {
	I1206 10:45:05.356405  399286 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1206 10:45:05.356409  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.356428  399286 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1206 10:45:05.356433  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356438  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.356446  399286 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1206 10:45:05.356453  399286 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1206 10:45:05.356457  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356465  399286 command_runner.go:130] >       "size":  "60857170",
	I1206 10:45:05.356469  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.356472  399286 command_runner.go:130] >         "value":  "0"
	I1206 10:45:05.356475  399286 command_runner.go:130] >       },
	I1206 10:45:05.356488  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.356492  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.356495  399286 command_runner.go:130] >     },
	I1206 10:45:05.356498  399286 command_runner.go:130] >     {
	I1206 10:45:05.356505  399286 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1206 10:45:05.356508  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.356513  399286 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1206 10:45:05.356516  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356520  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.356528  399286 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1206 10:45:05.356536  399286 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1206 10:45:05.356539  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356543  399286 command_runner.go:130] >       "size":  "84949999",
	I1206 10:45:05.356546  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.356550  399286 command_runner.go:130] >         "value":  "0"
	I1206 10:45:05.356553  399286 command_runner.go:130] >       },
	I1206 10:45:05.356557  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.356561  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.356564  399286 command_runner.go:130] >     },
	I1206 10:45:05.356567  399286 command_runner.go:130] >     {
	I1206 10:45:05.356573  399286 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1206 10:45:05.356577  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.356583  399286 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1206 10:45:05.356586  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356590  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.356598  399286 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1206 10:45:05.356606  399286 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1206 10:45:05.356609  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356617  399286 command_runner.go:130] >       "size":  "72170325",
	I1206 10:45:05.356623  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.356627  399286 command_runner.go:130] >         "value":  "0"
	I1206 10:45:05.356631  399286 command_runner.go:130] >       },
	I1206 10:45:05.356634  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.356638  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.356641  399286 command_runner.go:130] >     },
	I1206 10:45:05.356643  399286 command_runner.go:130] >     {
	I1206 10:45:05.356650  399286 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1206 10:45:05.356654  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.356659  399286 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1206 10:45:05.356662  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356666  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.356674  399286 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1206 10:45:05.356681  399286 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1206 10:45:05.356684  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356688  399286 command_runner.go:130] >       "size":  "74106775",
	I1206 10:45:05.356692  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.356695  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.356698  399286 command_runner.go:130] >     },
	I1206 10:45:05.356701  399286 command_runner.go:130] >     {
	I1206 10:45:05.356708  399286 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1206 10:45:05.356711  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.356716  399286 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1206 10:45:05.356719  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356723  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.356730  399286 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1206 10:45:05.356747  399286 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1206 10:45:05.356751  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356755  399286 command_runner.go:130] >       "size":  "49822549",
	I1206 10:45:05.356759  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.356763  399286 command_runner.go:130] >         "value":  "0"
	I1206 10:45:05.356766  399286 command_runner.go:130] >       },
	I1206 10:45:05.356770  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.356778  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.356781  399286 command_runner.go:130] >     },
	I1206 10:45:05.356784  399286 command_runner.go:130] >     {
	I1206 10:45:05.356790  399286 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1206 10:45:05.356794  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.356798  399286 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1206 10:45:05.356801  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356805  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.356812  399286 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1206 10:45:05.356820  399286 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1206 10:45:05.356823  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356826  399286 command_runner.go:130] >       "size":  "519884",
	I1206 10:45:05.356830  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.356833  399286 command_runner.go:130] >         "value":  "65535"
	I1206 10:45:05.356836  399286 command_runner.go:130] >       },
	I1206 10:45:05.356840  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.356843  399286 command_runner.go:130] >       "pinned":  true
	I1206 10:45:05.356850  399286 command_runner.go:130] >     }
	I1206 10:45:05.356853  399286 command_runner.go:130] >   ]
	I1206 10:45:05.356857  399286 command_runner.go:130] > }
	I1206 10:45:05.358491  399286 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:45:05.358523  399286 crio.go:433] Images already preloaded, skipping extraction
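The preload check that just concluded works by decoding the JSON that `crictl images --output json` emitted above and comparing repoTags against the expected set. A minimal decode of that JSON in Go; the struct fields follow the field names visible in the log ("images", "id", "repoTags", "size", "pinned"), and the comparison step is omitted:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// imageList matches the shape of `crictl images --output json`
	// as printed in the log above.
	type imageList struct {
		Images []struct {
			ID       string   `json:"id"`
			RepoTags []string `json:"repoTags"`
			Size     string   `json:"size"`
			Pinned   bool     `json:"pinned"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		for _, img := range list.Images {
			fmt.Println(img.RepoTags)
		}
	}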
	I1206 10:45:05.358585  399286 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:45:05.381820  399286 command_runner.go:130] > {
	I1206 10:45:05.381840  399286 command_runner.go:130] >   "images":  [
	I1206 10:45:05.381844  399286 command_runner.go:130] >     {
	I1206 10:45:05.381853  399286 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1206 10:45:05.381857  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.381864  399286 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1206 10:45:05.381867  399286 command_runner.go:130] >       ],
	I1206 10:45:05.381871  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.381880  399286 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1206 10:45:05.381888  399286 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1206 10:45:05.381892  399286 command_runner.go:130] >       ],
	I1206 10:45:05.381896  399286 command_runner.go:130] >       "size":  "111333938",
	I1206 10:45:05.381900  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.381909  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.381912  399286 command_runner.go:130] >     },
	I1206 10:45:05.381916  399286 command_runner.go:130] >     {
	I1206 10:45:05.381922  399286 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1206 10:45:05.381926  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.381932  399286 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1206 10:45:05.381935  399286 command_runner.go:130] >       ],
	I1206 10:45:05.381939  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.381947  399286 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1206 10:45:05.381956  399286 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1206 10:45:05.381959  399286 command_runner.go:130] >       ],
	I1206 10:45:05.381963  399286 command_runner.go:130] >       "size":  "29037500",
	I1206 10:45:05.381967  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.381973  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.381977  399286 command_runner.go:130] >     },
	I1206 10:45:05.381980  399286 command_runner.go:130] >     {
	I1206 10:45:05.381987  399286 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1206 10:45:05.381990  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.381999  399286 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1206 10:45:05.382003  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382007  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.382014  399286 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1206 10:45:05.382022  399286 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1206 10:45:05.382025  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382029  399286 command_runner.go:130] >       "size":  "74491780",
	I1206 10:45:05.382033  399286 command_runner.go:130] >       "username":  "nonroot",
	I1206 10:45:05.382037  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.382040  399286 command_runner.go:130] >     },
	I1206 10:45:05.382043  399286 command_runner.go:130] >     {
	I1206 10:45:05.382049  399286 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1206 10:45:05.382053  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.382058  399286 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1206 10:45:05.382063  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382067  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.382074  399286 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1206 10:45:05.382082  399286 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1206 10:45:05.382085  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382089  399286 command_runner.go:130] >       "size":  "60857170",
	I1206 10:45:05.382093  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.382096  399286 command_runner.go:130] >         "value":  "0"
	I1206 10:45:05.382100  399286 command_runner.go:130] >       },
	I1206 10:45:05.382398  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.382411  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.382415  399286 command_runner.go:130] >     },
	I1206 10:45:05.382419  399286 command_runner.go:130] >     {
	I1206 10:45:05.382427  399286 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1206 10:45:05.382437  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.382443  399286 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1206 10:45:05.382446  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382450  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.382463  399286 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1206 10:45:05.382476  399286 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1206 10:45:05.382479  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382484  399286 command_runner.go:130] >       "size":  "84949999",
	I1206 10:45:05.382492  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.382495  399286 command_runner.go:130] >         "value":  "0"
	I1206 10:45:05.382499  399286 command_runner.go:130] >       },
	I1206 10:45:05.382503  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.382507  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.382510  399286 command_runner.go:130] >     },
	I1206 10:45:05.382514  399286 command_runner.go:130] >     {
	I1206 10:45:05.382524  399286 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1206 10:45:05.382528  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.382534  399286 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1206 10:45:05.382541  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382546  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.382555  399286 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1206 10:45:05.382568  399286 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1206 10:45:05.382571  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382575  399286 command_runner.go:130] >       "size":  "72170325",
	I1206 10:45:05.382579  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.382583  399286 command_runner.go:130] >         "value":  "0"
	I1206 10:45:05.382590  399286 command_runner.go:130] >       },
	I1206 10:45:05.382594  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.382597  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.382601  399286 command_runner.go:130] >     },
	I1206 10:45:05.382604  399286 command_runner.go:130] >     {
	I1206 10:45:05.382615  399286 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1206 10:45:05.382618  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.382624  399286 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1206 10:45:05.382627  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382631  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.382643  399286 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1206 10:45:05.382651  399286 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1206 10:45:05.382658  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382666  399286 command_runner.go:130] >       "size":  "74106775",
	I1206 10:45:05.382672  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.382676  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.382679  399286 command_runner.go:130] >     },
	I1206 10:45:05.382682  399286 command_runner.go:130] >     {
	I1206 10:45:05.382693  399286 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1206 10:45:05.382697  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.382702  399286 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1206 10:45:05.382706  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382710  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.382722  399286 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1206 10:45:05.382745  399286 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1206 10:45:05.382753  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382757  399286 command_runner.go:130] >       "size":  "49822549",
	I1206 10:45:05.382761  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.382765  399286 command_runner.go:130] >         "value":  "0"
	I1206 10:45:05.382768  399286 command_runner.go:130] >       },
	I1206 10:45:05.382772  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.382780  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.382783  399286 command_runner.go:130] >     },
	I1206 10:45:05.382786  399286 command_runner.go:130] >     {
	I1206 10:45:05.382793  399286 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1206 10:45:05.382797  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.382805  399286 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1206 10:45:05.382808  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382812  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.382820  399286 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1206 10:45:05.382832  399286 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1206 10:45:05.382835  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382839  399286 command_runner.go:130] >       "size":  "519884",
	I1206 10:45:05.382843  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.382847  399286 command_runner.go:130] >         "value":  "65535"
	I1206 10:45:05.382857  399286 command_runner.go:130] >       },
	I1206 10:45:05.382861  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.382865  399286 command_runner.go:130] >       "pinned":  true
	I1206 10:45:05.382868  399286 command_runner.go:130] >     }
	I1206 10:45:05.382871  399286 command_runner.go:130] >   ]
	I1206 10:45:05.382874  399286 command_runner.go:130] > }
	I1206 10:45:05.396183  399286 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:45:05.396208  399286 cache_images.go:86] Images are preloaded, skipping loading
	I1206 10:45:05.396219  399286 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1206 10:45:05.396325  399286 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-196950 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
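The kubelet unit printed above uses the standard systemd drop-in trick: an empty `ExecStart=` first clears the base unit's command, then the second `ExecStart=` sets the real one with per-node flags. A sketch of rendering such a drop-in with Go's text/template; the template is trimmed to a few flags and the values are taken from the log, so this is illustrative rather than minikube's actual template:

	package main

	import (
		"os"
		"text/template"
	)

	// A trimmed kubelet drop-in: the empty ExecStart= clears the base
	// unit's command before the real one is set, as in the log above.
	const kubeletUnit = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart={{.Bin}} --hostname-override={{.Node}} --node-ip={{.IP}} --kubeconfig=/etc/kubernetes/kubelet.conf

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(kubeletUnit))
		err := t.Execute(os.Stdout, map[string]string{
			"Bin":  "/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet",
			"Node": "functional-196950",
			"IP":   "192.168.49.2",
		})
		if err != nil {
			panic(err)
		}
	}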
	I1206 10:45:05.396421  399286 ssh_runner.go:195] Run: crio config
	I1206 10:45:05.425462  399286 command_runner.go:130] ! time="2025-12-06T10:45:05.425119459Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1206 10:45:05.425532  399286 command_runner.go:130] ! time="2025-12-06T10:45:05.425157991Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1206 10:45:05.425754  399286 command_runner.go:130] ! time="2025-12-06T10:45:05.425195308Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1206 10:45:05.425797  399286 command_runner.go:130] ! time="2025-12-06T10:45:05.42522017Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1206 10:45:05.425982  399286 command_runner.go:130] ! time="2025-12-06T10:45:05.425299687Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:45:05.426160  399286 command_runner.go:130] ! time="2025-12-06T10:45:05.42561672Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1206 10:45:05.442529  399286 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1206 10:45:05.470811  399286 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1206 10:45:05.470887  399286 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1206 10:45:05.470910  399286 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1206 10:45:05.470925  399286 command_runner.go:130] > #
	I1206 10:45:05.470961  399286 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1206 10:45:05.470990  399286 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1206 10:45:05.471012  399286 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1206 10:45:05.471037  399286 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1206 10:45:05.471066  399286 command_runner.go:130] > # reload'.
	I1206 10:45:05.471089  399286 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1206 10:45:05.471110  399286 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1206 10:45:05.471132  399286 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1206 10:45:05.471165  399286 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1206 10:45:05.471189  399286 command_runner.go:130] > [crio]
	I1206 10:45:05.471211  399286 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1206 10:45:05.471233  399286 command_runner.go:130] > # containers images, in this directory.
	I1206 10:45:05.471266  399286 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1206 10:45:05.471291  399286 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1206 10:45:05.471336  399286 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1206 10:45:05.471369  399286 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1206 10:45:05.471416  399286 command_runner.go:130] > # imagestore = ""
	I1206 10:45:05.471447  399286 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1206 10:45:05.471467  399286 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1206 10:45:05.471498  399286 command_runner.go:130] > # storage_driver = "overlay"
	I1206 10:45:05.471527  399286 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1206 10:45:05.471540  399286 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1206 10:45:05.471544  399286 command_runner.go:130] > # storage_option = [
	I1206 10:45:05.471548  399286 command_runner.go:130] > # ]
	I1206 10:45:05.471554  399286 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1206 10:45:05.471561  399286 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1206 10:45:05.471566  399286 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1206 10:45:05.471572  399286 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1206 10:45:05.471584  399286 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1206 10:45:05.471601  399286 command_runner.go:130] > # always happen on a node reboot
	I1206 10:45:05.471614  399286 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1206 10:45:05.471624  399286 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1206 10:45:05.471631  399286 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1206 10:45:05.471647  399286 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1206 10:45:05.471665  399286 command_runner.go:130] > # version_file_persist = ""
	I1206 10:45:05.471674  399286 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1206 10:45:05.471685  399286 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1206 10:45:05.471689  399286 command_runner.go:130] > # internal_wipe = true
	I1206 10:45:05.471701  399286 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1206 10:45:05.471736  399286 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1206 10:45:05.471747  399286 command_runner.go:130] > # internal_repair = true
	I1206 10:45:05.471753  399286 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1206 10:45:05.471760  399286 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1206 10:45:05.471768  399286 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1206 10:45:05.471774  399286 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1206 10:45:05.471790  399286 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1206 10:45:05.471793  399286 command_runner.go:130] > [crio.api]
	I1206 10:45:05.471799  399286 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1206 10:45:05.471810  399286 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1206 10:45:05.471817  399286 command_runner.go:130] > # IP address on which the stream server will listen.
	I1206 10:45:05.471822  399286 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1206 10:45:05.471829  399286 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1206 10:45:05.471837  399286 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1206 10:45:05.471841  399286 command_runner.go:130] > # stream_port = "0"
	I1206 10:45:05.471852  399286 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1206 10:45:05.471856  399286 command_runner.go:130] > # stream_enable_tls = false
	I1206 10:45:05.471867  399286 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1206 10:45:05.471871  399286 command_runner.go:130] > # stream_idle_timeout = ""
	I1206 10:45:05.471891  399286 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1206 10:45:05.471897  399286 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1206 10:45:05.471905  399286 command_runner.go:130] > # stream_tls_cert = ""
	I1206 10:45:05.471912  399286 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1206 10:45:05.471918  399286 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1206 10:45:05.471922  399286 command_runner.go:130] > # stream_tls_key = ""
	I1206 10:45:05.471928  399286 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1206 10:45:05.471937  399286 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1206 10:45:05.471942  399286 command_runner.go:130] > # automatically pick up the changes.
	I1206 10:45:05.471950  399286 command_runner.go:130] > # stream_tls_ca = ""
	I1206 10:45:05.471981  399286 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1206 10:45:05.471991  399286 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1206 10:45:05.471999  399286 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1206 10:45:05.472004  399286 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1206 10:45:05.472010  399286 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1206 10:45:05.472018  399286 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1206 10:45:05.472022  399286 command_runner.go:130] > [crio.runtime]
	I1206 10:45:05.472029  399286 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1206 10:45:05.472036  399286 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1206 10:45:05.472041  399286 command_runner.go:130] > # "nofile=1024:2048"
	I1206 10:45:05.472057  399286 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1206 10:45:05.472061  399286 command_runner.go:130] > # default_ulimits = [
	I1206 10:45:05.472064  399286 command_runner.go:130] > # ]
	I1206 10:45:05.472070  399286 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1206 10:45:05.472077  399286 command_runner.go:130] > # no_pivot = false
	I1206 10:45:05.472083  399286 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1206 10:45:05.472090  399286 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1206 10:45:05.472095  399286 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1206 10:45:05.472103  399286 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1206 10:45:05.472108  399286 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1206 10:45:05.472117  399286 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1206 10:45:05.472123  399286 command_runner.go:130] > # conmon = ""
	I1206 10:45:05.472127  399286 command_runner.go:130] > # Cgroup setting for conmon
	I1206 10:45:05.472137  399286 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1206 10:45:05.472143  399286 command_runner.go:130] > conmon_cgroup = "pod"
	I1206 10:45:05.472152  399286 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1206 10:45:05.472157  399286 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1206 10:45:05.472164  399286 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1206 10:45:05.472168  399286 command_runner.go:130] > # conmon_env = [
	I1206 10:45:05.472173  399286 command_runner.go:130] > # ]
	I1206 10:45:05.472180  399286 command_runner.go:130] > # Additional environment variables to set for all the
	I1206 10:45:05.472188  399286 command_runner.go:130] > # containers. These are overridden if set in the
	I1206 10:45:05.472198  399286 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1206 10:45:05.472204  399286 command_runner.go:130] > # default_env = [
	I1206 10:45:05.472208  399286 command_runner.go:130] > # ]
	I1206 10:45:05.472213  399286 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1206 10:45:05.472223  399286 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1206 10:45:05.472229  399286 command_runner.go:130] > # selinux = false
	I1206 10:45:05.472236  399286 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1206 10:45:05.472246  399286 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1206 10:45:05.472252  399286 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:45:05.472255  399286 command_runner.go:130] > # seccomp_profile = ""
	I1206 10:45:05.472262  399286 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1206 10:45:05.472270  399286 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:45:05.472274  399286 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1206 10:45:05.472281  399286 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1206 10:45:05.472287  399286 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1206 10:45:05.472295  399286 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1206 10:45:05.472302  399286 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1206 10:45:05.472315  399286 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:45:05.472320  399286 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1206 10:45:05.472326  399286 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1206 10:45:05.472330  399286 command_runner.go:130] > # the cgroup blockio controller.
	I1206 10:45:05.472337  399286 command_runner.go:130] > # blockio_config_file = ""
	I1206 10:45:05.472345  399286 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1206 10:45:05.472353  399286 command_runner.go:130] > # blockio parameters.
	I1206 10:45:05.472357  399286 command_runner.go:130] > # blockio_reload = false
	I1206 10:45:05.472364  399286 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1206 10:45:05.472367  399286 command_runner.go:130] > # irqbalance daemon.
	I1206 10:45:05.472373  399286 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1206 10:45:05.472381  399286 command_runner.go:130] > # irqbalance_config_restore_file allows setting a CPU mask CRI-O should
	I1206 10:45:05.472391  399286 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1206 10:45:05.472412  399286 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1206 10:45:05.472419  399286 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1206 10:45:05.472428  399286 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1206 10:45:05.472437  399286 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:45:05.472448  399286 command_runner.go:130] > # rdt_config_file = ""
	I1206 10:45:05.472455  399286 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1206 10:45:05.472459  399286 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1206 10:45:05.472465  399286 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1206 10:45:05.472472  399286 command_runner.go:130] > # separate_pull_cgroup = ""
	I1206 10:45:05.472479  399286 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1206 10:45:05.472486  399286 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1206 10:45:05.472498  399286 command_runner.go:130] > # will be added.
	I1206 10:45:05.472503  399286 command_runner.go:130] > # default_capabilities = [
	I1206 10:45:05.472506  399286 command_runner.go:130] > # 	"CHOWN",
	I1206 10:45:05.472510  399286 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1206 10:45:05.472520  399286 command_runner.go:130] > # 	"FSETID",
	I1206 10:45:05.472525  399286 command_runner.go:130] > # 	"FOWNER",
	I1206 10:45:05.472529  399286 command_runner.go:130] > # 	"SETGID",
	I1206 10:45:05.472539  399286 command_runner.go:130] > # 	"SETUID",
	I1206 10:45:05.472558  399286 command_runner.go:130] > # 	"SETPCAP",
	I1206 10:45:05.472573  399286 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1206 10:45:05.472576  399286 command_runner.go:130] > # 	"KILL",
	I1206 10:45:05.472579  399286 command_runner.go:130] > # ]
	I1206 10:45:05.472587  399286 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1206 10:45:05.472602  399286 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1206 10:45:05.472607  399286 command_runner.go:130] > # add_inheritable_capabilities = false
	I1206 10:45:05.472616  399286 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1206 10:45:05.472628  399286 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1206 10:45:05.472632  399286 command_runner.go:130] > default_sysctls = [
	I1206 10:45:05.472637  399286 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1206 10:45:05.472643  399286 command_runner.go:130] > ]
	I1206 10:45:05.472650  399286 command_runner.go:130] > # List of devices on the host that a
	I1206 10:45:05.472660  399286 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1206 10:45:05.472664  399286 command_runner.go:130] > # allowed_devices = [
	I1206 10:45:05.472670  399286 command_runner.go:130] > # 	"/dev/fuse",
	I1206 10:45:05.472674  399286 command_runner.go:130] > # 	"/dev/net/tun",
	I1206 10:45:05.472681  399286 command_runner.go:130] > # ]
	I1206 10:45:05.472689  399286 command_runner.go:130] > # List of additional devices, specified as
	I1206 10:45:05.472697  399286 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1206 10:45:05.472703  399286 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1206 10:45:05.472711  399286 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1206 10:45:05.472716  399286 command_runner.go:130] > # additional_devices = [
	I1206 10:45:05.472722  399286 command_runner.go:130] > # ]
	I1206 10:45:05.472730  399286 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1206 10:45:05.472737  399286 command_runner.go:130] > # cdi_spec_dirs = [
	I1206 10:45:05.472743  399286 command_runner.go:130] > # 	"/etc/cdi",
	I1206 10:45:05.472747  399286 command_runner.go:130] > # 	"/var/run/cdi",
	I1206 10:45:05.472750  399286 command_runner.go:130] > # ]
	I1206 10:45:05.472757  399286 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1206 10:45:05.472766  399286 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1206 10:45:05.472770  399286 command_runner.go:130] > # Defaults to false.
	I1206 10:45:05.472775  399286 command_runner.go:130] > # device_ownership_from_security_context = false
	I1206 10:45:05.472782  399286 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1206 10:45:05.472791  399286 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1206 10:45:05.472795  399286 command_runner.go:130] > # hooks_dir = [
	I1206 10:45:05.472800  399286 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1206 10:45:05.472806  399286 command_runner.go:130] > # ]
	I1206 10:45:05.472813  399286 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1206 10:45:05.472819  399286 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1206 10:45:05.472827  399286 command_runner.go:130] > # its default mounts from the following two files:
	I1206 10:45:05.472830  399286 command_runner.go:130] > #
	I1206 10:45:05.472836  399286 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1206 10:45:05.472845  399286 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1206 10:45:05.472852  399286 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1206 10:45:05.472858  399286 command_runner.go:130] > #
	I1206 10:45:05.472865  399286 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1206 10:45:05.472871  399286 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1206 10:45:05.472878  399286 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1206 10:45:05.472887  399286 command_runner.go:130] > #      only add mounts it finds in this file.
	I1206 10:45:05.472896  399286 command_runner.go:130] > #
	I1206 10:45:05.472902  399286 command_runner.go:130] > # default_mounts_file = ""
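A minimal sketch of the /SRC:/DST format read from these files; both paths below are hypothetical and not taken from this run:

	# Example mounts.conf contents, one mount per line:
	/opt/host-certs:/etc/pki/certs
	/var/lib/host-secrets:/run/secrets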
	I1206 10:45:05.472910  399286 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1206 10:45:05.472919  399286 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1206 10:45:05.472932  399286 command_runner.go:130] > # pids_limit = -1
	I1206 10:45:05.472938  399286 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1206 10:45:05.472947  399286 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1206 10:45:05.472961  399286 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1206 10:45:05.472979  399286 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1206 10:45:05.472983  399286 command_runner.go:130] > # log_size_max = -1
	I1206 10:45:05.472990  399286 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1206 10:45:05.472997  399286 command_runner.go:130] > # log_to_journald = false
	I1206 10:45:05.473006  399286 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1206 10:45:05.473011  399286 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1206 10:45:05.473016  399286 command_runner.go:130] > # Path to directory for container attach sockets.
	I1206 10:45:05.473024  399286 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1206 10:45:05.473032  399286 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1206 10:45:05.473036  399286 command_runner.go:130] > # bind_mount_prefix = ""
	I1206 10:45:05.473044  399286 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1206 10:45:05.473049  399286 command_runner.go:130] > # read_only = false
	I1206 10:45:05.473063  399286 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1206 10:45:05.473070  399286 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1206 10:45:05.473074  399286 command_runner.go:130] > # live configuration reload.
	I1206 10:45:05.473085  399286 command_runner.go:130] > # log_level = "info"
	I1206 10:45:05.473092  399286 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1206 10:45:05.473097  399286 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:45:05.473101  399286 command_runner.go:130] > # log_filter = ""
	I1206 10:45:05.473110  399286 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1206 10:45:05.473119  399286 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1206 10:45:05.473123  399286 command_runner.go:130] > # separated by comma.
	I1206 10:45:05.473132  399286 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1206 10:45:05.473138  399286 command_runner.go:130] > # uid_mappings = ""
	I1206 10:45:05.473145  399286 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1206 10:45:05.473155  399286 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1206 10:45:05.473162  399286 command_runner.go:130] > # separated by comma.
	I1206 10:45:05.473171  399286 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1206 10:45:05.473178  399286 command_runner.go:130] > # gid_mappings = ""
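A minimal sketch of the containerID:HostID:Size syntax above, assuming hypothetical values (map container ID 0 to host ID 100000 for a range of 65536 IDs):

	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"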
	I1206 10:45:05.473185  399286 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1206 10:45:05.473197  399286 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1206 10:45:05.473206  399286 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1206 10:45:05.473217  399286 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1206 10:45:05.473223  399286 command_runner.go:130] > # minimum_mappable_uid = -1
	I1206 10:45:05.473230  399286 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1206 10:45:05.473238  399286 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1206 10:45:05.473249  399286 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1206 10:45:05.473260  399286 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1206 10:45:05.473264  399286 command_runner.go:130] > # minimum_mappable_gid = -1
	I1206 10:45:05.473270  399286 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1206 10:45:05.473282  399286 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1206 10:45:05.473287  399286 command_runner.go:130] > # value is 30s; lower values are not considered by CRI-O.
	I1206 10:45:05.473292  399286 command_runner.go:130] > # ctr_stop_timeout = 30
	I1206 10:45:05.473298  399286 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1206 10:45:05.473307  399286 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1206 10:45:05.473312  399286 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1206 10:45:05.473317  399286 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1206 10:45:05.473323  399286 command_runner.go:130] > # drop_infra_ctr = true
	I1206 10:45:05.473330  399286 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1206 10:45:05.473339  399286 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1206 10:45:05.473347  399286 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1206 10:45:05.473351  399286 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1206 10:45:05.473362  399286 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1206 10:45:05.473373  399286 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1206 10:45:05.473378  399286 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1206 10:45:05.473383  399286 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1206 10:45:05.473389  399286 command_runner.go:130] > # shared_cpuset = ""
	I1206 10:45:05.473397  399286 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1206 10:45:05.473408  399286 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1206 10:45:05.473415  399286 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1206 10:45:05.473423  399286 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1206 10:45:05.473429  399286 command_runner.go:130] > # pinns_path = ""
	I1206 10:45:05.473435  399286 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1206 10:45:05.473442  399286 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1206 10:45:05.473446  399286 command_runner.go:130] > # enable_criu_support = true
	I1206 10:45:05.473458  399286 command_runner.go:130] > # Enable/disable the generation of container and
	I1206 10:45:05.473465  399286 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1206 10:45:05.473469  399286 command_runner.go:130] > # enable_pod_events = false
	I1206 10:45:05.473476  399286 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1206 10:45:05.473483  399286 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1206 10:45:05.473487  399286 command_runner.go:130] > # default_runtime = "crun"
	I1206 10:45:05.473492  399286 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1206 10:45:05.473502  399286 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1206 10:45:05.473513  399286 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1206 10:45:05.473521  399286 command_runner.go:130] > # creation as a file is not desired either.
	I1206 10:45:05.473531  399286 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1206 10:45:05.473540  399286 command_runner.go:130] > # the hostname is being managed dynamically.
	I1206 10:45:05.473551  399286 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1206 10:45:05.473554  399286 command_runner.go:130] > # ]
	I1206 10:45:05.473561  399286 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1206 10:45:05.473567  399286 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1206 10:45:05.473576  399286 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1206 10:45:05.473582  399286 command_runner.go:130] > # Each entry in the table should follow the format:
	I1206 10:45:05.473596  399286 command_runner.go:130] > #
	I1206 10:45:05.473602  399286 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1206 10:45:05.473606  399286 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1206 10:45:05.473610  399286 command_runner.go:130] > # runtime_type = "oci"
	I1206 10:45:05.473616  399286 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1206 10:45:05.473623  399286 command_runner.go:130] > # inherit_default_runtime = false
	I1206 10:45:05.473628  399286 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1206 10:45:05.473632  399286 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1206 10:45:05.473646  399286 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1206 10:45:05.473650  399286 command_runner.go:130] > # monitor_env = []
	I1206 10:45:05.473654  399286 command_runner.go:130] > # privileged_without_host_devices = false
	I1206 10:45:05.473659  399286 command_runner.go:130] > # allowed_annotations = []
	I1206 10:45:05.473667  399286 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1206 10:45:05.473673  399286 command_runner.go:130] > # no_sync_log = false
	I1206 10:45:05.473677  399286 command_runner.go:130] > # default_annotations = {}
	I1206 10:45:05.473682  399286 command_runner.go:130] > # stream_websockets = false
	I1206 10:45:05.473689  399286 command_runner.go:130] > # seccomp_profile = ""
	I1206 10:45:05.473708  399286 command_runner.go:130] > # Where:
	I1206 10:45:05.473717  399286 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1206 10:45:05.473724  399286 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1206 10:45:05.473730  399286 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1206 10:45:05.473739  399286 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1206 10:45:05.473743  399286 command_runner.go:130] > #   in $PATH.
	I1206 10:45:05.473749  399286 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1206 10:45:05.473754  399286 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1206 10:45:05.473763  399286 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1206 10:45:05.473768  399286 command_runner.go:130] > #   state.
	I1206 10:45:05.473775  399286 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1206 10:45:05.473789  399286 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1206 10:45:05.473796  399286 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1206 10:45:05.473802  399286 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1206 10:45:05.473810  399286 command_runner.go:130] > #   the values from the default runtime on load time.
	I1206 10:45:05.473816  399286 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1206 10:45:05.473824  399286 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1206 10:45:05.473834  399286 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1206 10:45:05.473841  399286 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1206 10:45:05.473846  399286 command_runner.go:130] > #   The currently recognized values are:
	I1206 10:45:05.473852  399286 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1206 10:45:05.473862  399286 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1206 10:45:05.473868  399286 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1206 10:45:05.473876  399286 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1206 10:45:05.473890  399286 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1206 10:45:05.473900  399286 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1206 10:45:05.473907  399286 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1206 10:45:05.473914  399286 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1206 10:45:05.473924  399286 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1206 10:45:05.473930  399286 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1206 10:45:05.473938  399286 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1206 10:45:05.473946  399286 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1206 10:45:05.473955  399286 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1206 10:45:05.473961  399286 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1206 10:45:05.473970  399286 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1206 10:45:05.473978  399286 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1206 10:45:05.473988  399286 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1206 10:45:05.473992  399286 command_runner.go:130] > #   deprecated option "conmon".
	I1206 10:45:05.474000  399286 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1206 10:45:05.474008  399286 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1206 10:45:05.474015  399286 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1206 10:45:05.474020  399286 command_runner.go:130] > #   should be moved to the container's cgroup
	I1206 10:45:05.474027  399286 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1206 10:45:05.474034  399286 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1206 10:45:05.474042  399286 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1206 10:45:05.474048  399286 command_runner.go:130] > #   conmon-rs by using:
	I1206 10:45:05.474057  399286 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1206 10:45:05.474070  399286 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1206 10:45:05.474077  399286 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1206 10:45:05.474091  399286 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1206 10:45:05.474096  399286 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1206 10:45:05.474106  399286 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1206 10:45:05.474114  399286 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1206 10:45:05.474122  399286 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1206 10:45:05.474130  399286 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1206 10:45:05.474143  399286 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1206 10:45:05.474148  399286 command_runner.go:130] > #   when a machine crash happens.
	I1206 10:45:05.474159  399286 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1206 10:45:05.474172  399286 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1206 10:45:05.474181  399286 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1206 10:45:05.474188  399286 command_runner.go:130] > #   seccomp profile for the runtime.
	I1206 10:45:05.474212  399286 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1206 10:45:05.474223  399286 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1206 10:45:05.474227  399286 command_runner.go:130] > #
	I1206 10:45:05.474233  399286 command_runner.go:130] > # Using the seccomp notifier feature:
	I1206 10:45:05.474236  399286 command_runner.go:130] > #
	I1206 10:45:05.474244  399286 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1206 10:45:05.474254  399286 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1206 10:45:05.474257  399286 command_runner.go:130] > #
	I1206 10:45:05.474264  399286 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1206 10:45:05.474273  399286 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1206 10:45:05.474276  399286 command_runner.go:130] > #
	I1206 10:45:05.474283  399286 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1206 10:45:05.474286  399286 command_runner.go:130] > # feature.
	I1206 10:45:05.474289  399286 command_runner.go:130] > #
	I1206 10:45:05.474299  399286 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1206 10:45:05.474307  399286 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1206 10:45:05.474314  399286 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1206 10:45:05.474322  399286 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1206 10:45:05.474329  399286 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1206 10:45:05.474336  399286 command_runner.go:130] > #
	I1206 10:45:05.474344  399286 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1206 10:45:05.474350  399286 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1206 10:45:05.474354  399286 command_runner.go:130] > #
	I1206 10:45:05.474361  399286 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1206 10:45:05.474371  399286 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1206 10:45:05.474374  399286 command_runner.go:130] > #
	I1206 10:45:05.474380  399286 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1206 10:45:05.474386  399286 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1206 10:45:05.474392  399286 command_runner.go:130] > # limitation.
	I1206 10:45:05.474401  399286 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1206 10:45:05.474409  399286 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1206 10:45:05.474413  399286 command_runner.go:130] > runtime_type = ""
	I1206 10:45:05.474417  399286 command_runner.go:130] > runtime_root = "/run/crun"
	I1206 10:45:05.474422  399286 command_runner.go:130] > inherit_default_runtime = false
	I1206 10:45:05.474426  399286 command_runner.go:130] > runtime_config_path = ""
	I1206 10:45:05.474432  399286 command_runner.go:130] > container_min_memory = ""
	I1206 10:45:05.474437  399286 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1206 10:45:05.474442  399286 command_runner.go:130] > monitor_cgroup = "pod"
	I1206 10:45:05.474448  399286 command_runner.go:130] > monitor_exec_cgroup = ""
	I1206 10:45:05.474453  399286 command_runner.go:130] > allowed_annotations = [
	I1206 10:45:05.474461  399286 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1206 10:45:05.474464  399286 command_runner.go:130] > ]
	I1206 10:45:05.474469  399286 command_runner.go:130] > privileged_without_host_devices = false
	I1206 10:45:05.474473  399286 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1206 10:45:05.474478  399286 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1206 10:45:05.474484  399286 command_runner.go:130] > runtime_type = ""
	I1206 10:45:05.474489  399286 command_runner.go:130] > runtime_root = "/run/runc"
	I1206 10:45:05.474496  399286 command_runner.go:130] > inherit_default_runtime = false
	I1206 10:45:05.474501  399286 command_runner.go:130] > runtime_config_path = ""
	I1206 10:45:05.474506  399286 command_runner.go:130] > container_min_memory = ""
	I1206 10:45:05.474513  399286 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1206 10:45:05.474518  399286 command_runner.go:130] > monitor_cgroup = "pod"
	I1206 10:45:05.474522  399286 command_runner.go:130] > monitor_exec_cgroup = ""
	I1206 10:45:05.474530  399286 command_runner.go:130] > privileged_without_host_devices = false
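For illustration, a runtime-handler entry that opts into the seccomp notifier described above could look like the following sketch; the handler name "runc-notify" is hypothetical and not part of this run's configuration:

	[crio.runtime.runtimes.runc-notify]
	runtime_path = "/usr/libexec/crio/runc"
	monitor_path = "/usr/libexec/crio/conmon"
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]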
	I1206 10:45:05.474540  399286 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1206 10:45:05.474548  399286 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1206 10:45:05.474556  399286 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1206 10:45:05.474564  399286 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1206 10:45:05.474575  399286 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1206 10:45:05.474592  399286 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1206 10:45:05.474602  399286 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1206 10:45:05.474610  399286 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1206 10:45:05.474622  399286 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1206 10:45:05.474635  399286 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1206 10:45:05.474642  399286 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1206 10:45:05.474652  399286 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1206 10:45:05.474656  399286 command_runner.go:130] > # Example:
	I1206 10:45:05.474664  399286 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1206 10:45:05.474672  399286 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1206 10:45:05.474677  399286 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1206 10:45:05.474686  399286 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1206 10:45:05.474691  399286 command_runner.go:130] > # cpuset = "0-1"
	I1206 10:45:05.474703  399286 command_runner.go:130] > # cpushares = "5"
	I1206 10:45:05.474708  399286 command_runner.go:130] > # cpuquota = "1000"
	I1206 10:45:05.474712  399286 command_runner.go:130] > # cpuperiod = "100000"
	I1206 10:45:05.474716  399286 command_runner.go:130] > # cpulimit = "35"
	I1206 10:45:05.474720  399286 command_runner.go:130] > # Where:
	I1206 10:45:05.474724  399286 command_runner.go:130] > # The workload name is workload-type.
	I1206 10:45:05.474738  399286 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1206 10:45:05.474744  399286 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1206 10:45:05.474749  399286 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1206 10:45:05.474761  399286 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1206 10:45:05.474777  399286 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
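As a worked example of the cpulimit-to-cpuquota calculation described above, assuming the cpuperiod from the sample:

	# Worked example (illustrative): with cpuperiod = "100000" and cpulimit = "35",
	# cpuquota = (35 / 1000) * 100000 = 3500 microseconds, which overrides
	# the cpuquota = "1000" value from the sample above.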
	I1206 10:45:05.474783  399286 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1206 10:45:05.474790  399286 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1206 10:45:05.474797  399286 command_runner.go:130] > # Default value is set to true
	I1206 10:45:05.474803  399286 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1206 10:45:05.474809  399286 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1206 10:45:05.474821  399286 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1206 10:45:05.474826  399286 command_runner.go:130] > # Default value is set to 'false'
	I1206 10:45:05.474830  399286 command_runner.go:130] > # disable_hostport_mapping = false
	I1206 10:45:05.474836  399286 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1206 10:45:05.474847  399286 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1206 10:45:05.474853  399286 command_runner.go:130] > # timezone = ""
	I1206 10:45:05.474860  399286 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1206 10:45:05.474866  399286 command_runner.go:130] > #
	I1206 10:45:05.474874  399286 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1206 10:45:05.474883  399286 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1206 10:45:05.474889  399286 command_runner.go:130] > [crio.image]
	I1206 10:45:05.474895  399286 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1206 10:45:05.474899  399286 command_runner.go:130] > # default_transport = "docker://"
	I1206 10:45:05.474913  399286 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1206 10:45:05.474920  399286 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1206 10:45:05.474924  399286 command_runner.go:130] > # global_auth_file = ""
	I1206 10:45:05.474929  399286 command_runner.go:130] > # The image used to instantiate infra containers.
	I1206 10:45:05.474938  399286 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:45:05.474943  399286 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1206 10:45:05.474952  399286 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1206 10:45:05.474959  399286 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1206 10:45:05.474967  399286 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:45:05.474972  399286 command_runner.go:130] > # pause_image_auth_file = ""
	I1206 10:45:05.474977  399286 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1206 10:45:05.474984  399286 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1206 10:45:05.474994  399286 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1206 10:45:05.475000  399286 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1206 10:45:05.475009  399286 command_runner.go:130] > # pause_command = "/pause"
	I1206 10:45:05.475015  399286 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1206 10:45:05.475021  399286 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1206 10:45:05.475030  399286 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1206 10:45:05.475036  399286 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1206 10:45:05.475044  399286 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1206 10:45:05.475051  399286 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1206 10:45:05.475058  399286 command_runner.go:130] > # pinned_images = [
	I1206 10:45:05.475061  399286 command_runner.go:130] > # ]
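The three pattern styles described above can be combined; a minimal sketch follows (only the pause image appears in this run's configuration, the etcd and coredns entries are hypothetical):

	pinned_images = [
		"registry.k8s.io/pause:3.10.1",   # exact match: must match the entire name
		"registry.k8s.io/etcd*",          # glob match: wildcard at the end
		"*coredns*",                      # keyword match: wildcards on both ends
	]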
	I1206 10:45:05.475067  399286 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1206 10:45:05.475074  399286 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1206 10:45:05.475083  399286 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1206 10:45:05.475090  399286 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1206 10:45:05.475098  399286 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1206 10:45:05.475104  399286 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1206 10:45:05.475110  399286 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1206 10:45:05.475120  399286 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1206 10:45:05.475129  399286 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1206 10:45:05.475138  399286 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I1206 10:45:05.475145  399286 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1206 10:45:05.475150  399286 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1206 10:45:05.475156  399286 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1206 10:45:05.475165  399286 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1206 10:45:05.475169  399286 command_runner.go:130] > # changing them here.
	I1206 10:45:05.475176  399286 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1206 10:45:05.475183  399286 command_runner.go:130] > # insecure_registries = [
	I1206 10:45:05.475186  399286 command_runner.go:130] > # ]
	I1206 10:45:05.475193  399286 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1206 10:45:05.475201  399286 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1206 10:45:05.475208  399286 command_runner.go:130] > # image_volumes = "mkdir"
	I1206 10:45:05.475214  399286 command_runner.go:130] > # Temporary directory to use for storing big files
	I1206 10:45:05.475220  399286 command_runner.go:130] > # big_files_temporary_dir = ""
	I1206 10:45:05.475226  399286 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1206 10:45:05.475236  399286 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1206 10:45:05.475241  399286 command_runner.go:130] > # auto_reload_registries = false
	I1206 10:45:05.475247  399286 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1206 10:45:05.475257  399286 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval as pull_progress_timeout / 10.
	I1206 10:45:05.475267  399286 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1206 10:45:05.475271  399286 command_runner.go:130] > # pull_progress_timeout = "0s"
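For example, under the interval rule above, a pull_progress_timeout of "10s" would produce progress output every 1s (10s / 10), while the "0s" default disables both the timeout and the progress output.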
	I1206 10:45:05.475277  399286 command_runner.go:130] > # The mode of short name resolution.
	I1206 10:45:05.475284  399286 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1206 10:45:05.475293  399286 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1206 10:45:05.475298  399286 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1206 10:45:05.475314  399286 command_runner.go:130] > # short_name_mode = "enforcing"
	I1206 10:45:05.475321  399286 command_runner.go:130] > # OCIArtifactMountSupport determines whether CRI-O should support OCI artifacts.
	I1206 10:45:05.475327  399286 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1206 10:45:05.475335  399286 command_runner.go:130] > # oci_artifact_mount_support = true
	I1206 10:45:05.475343  399286 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1206 10:45:05.475349  399286 command_runner.go:130] > # CNI plugins.
	I1206 10:45:05.475353  399286 command_runner.go:130] > [crio.network]
	I1206 10:45:05.475360  399286 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1206 10:45:05.475368  399286 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1206 10:45:05.475386  399286 command_runner.go:130] > # cni_default_network = ""
	I1206 10:45:05.475398  399286 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1206 10:45:05.475407  399286 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1206 10:45:05.475413  399286 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1206 10:45:05.475419  399286 command_runner.go:130] > # plugin_dirs = [
	I1206 10:45:05.475424  399286 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1206 10:45:05.475429  399286 command_runner.go:130] > # ]
	I1206 10:45:05.475434  399286 command_runner.go:130] > # List of included pod metrics.
	I1206 10:45:05.475441  399286 command_runner.go:130] > # included_pod_metrics = [
	I1206 10:45:05.475445  399286 command_runner.go:130] > # ]
	I1206 10:45:05.475451  399286 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1206 10:45:05.475457  399286 command_runner.go:130] > [crio.metrics]
	I1206 10:45:05.475463  399286 command_runner.go:130] > # Globally enable or disable metrics support.
	I1206 10:45:05.475467  399286 command_runner.go:130] > # enable_metrics = false
	I1206 10:45:05.475472  399286 command_runner.go:130] > # Specify enabled metrics collectors.
	I1206 10:45:05.475476  399286 command_runner.go:130] > # Per default all metrics are enabled.
	I1206 10:45:05.475483  399286 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1206 10:45:05.475490  399286 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1206 10:45:05.475497  399286 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1206 10:45:05.475501  399286 command_runner.go:130] > # metrics_collectors = [
	I1206 10:45:05.475505  399286 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1206 10:45:05.475510  399286 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1206 10:45:05.475518  399286 command_runner.go:130] > # 	"containers_oom_total",
	I1206 10:45:05.475522  399286 command_runner.go:130] > # 	"processes_defunct",
	I1206 10:45:05.475528  399286 command_runner.go:130] > # 	"operations_total",
	I1206 10:45:05.475533  399286 command_runner.go:130] > # 	"operations_latency_seconds",
	I1206 10:45:05.475540  399286 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1206 10:45:05.475547  399286 command_runner.go:130] > # 	"operations_errors_total",
	I1206 10:45:05.475554  399286 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1206 10:45:05.475559  399286 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1206 10:45:05.475564  399286 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1206 10:45:05.475571  399286 command_runner.go:130] > # 	"image_pulls_success_total",
	I1206 10:45:05.475576  399286 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1206 10:45:05.475583  399286 command_runner.go:130] > # 	"containers_oom_count_total",
	I1206 10:45:05.475590  399286 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1206 10:45:05.475602  399286 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1206 10:45:05.475607  399286 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1206 10:45:05.475610  399286 command_runner.go:130] > # ]
	I1206 10:45:05.475616  399286 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1206 10:45:05.475620  399286 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1206 10:45:05.475626  399286 command_runner.go:130] > # The port on which the metrics server will listen.
	I1206 10:45:05.475639  399286 command_runner.go:130] > # metrics_port = 9090
	I1206 10:45:05.475646  399286 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1206 10:45:05.475649  399286 command_runner.go:130] > # metrics_socket = ""
	I1206 10:45:05.475657  399286 command_runner.go:130] > # The certificate for the secure metrics server.
	I1206 10:45:05.475670  399286 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1206 10:45:05.475677  399286 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1206 10:45:05.475691  399286 command_runner.go:130] > # certificate on any modification event.
	I1206 10:45:05.475695  399286 command_runner.go:130] > # metrics_cert = ""
	I1206 10:45:05.475703  399286 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1206 10:45:05.475708  399286 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1206 10:45:05.475712  399286 command_runner.go:130] > # metrics_key = ""
	I1206 10:45:05.475720  399286 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1206 10:45:05.475727  399286 command_runner.go:130] > [crio.tracing]
	I1206 10:45:05.475732  399286 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1206 10:45:05.475737  399286 command_runner.go:130] > # enable_tracing = false
	I1206 10:45:05.475748  399286 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1206 10:45:05.475753  399286 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1206 10:45:05.475767  399286 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1206 10:45:05.475772  399286 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1206 10:45:05.475781  399286 command_runner.go:130] > # CRI-O NRI configuration.
	I1206 10:45:05.475784  399286 command_runner.go:130] > [crio.nri]
	I1206 10:45:05.475789  399286 command_runner.go:130] > # Globally enable or disable NRI.
	I1206 10:45:05.475792  399286 command_runner.go:130] > # enable_nri = true
	I1206 10:45:05.475799  399286 command_runner.go:130] > # NRI socket to listen on.
	I1206 10:45:05.475804  399286 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1206 10:45:05.475811  399286 command_runner.go:130] > # NRI plugin directory to use.
	I1206 10:45:05.475817  399286 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1206 10:45:05.475825  399286 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1206 10:45:05.475830  399286 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1206 10:45:05.475835  399286 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1206 10:45:05.475891  399286 command_runner.go:130] > # nri_disable_connections = false
	I1206 10:45:05.475901  399286 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1206 10:45:05.475906  399286 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1206 10:45:05.475911  399286 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1206 10:45:05.475918  399286 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1206 10:45:05.475923  399286 command_runner.go:130] > # NRI default validator configuration.
	I1206 10:45:05.475933  399286 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1206 10:45:05.475940  399286 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1206 10:45:05.475946  399286 command_runner.go:130] > # can be restricted/rejected:
	I1206 10:45:05.475950  399286 command_runner.go:130] > # - OCI hook injection
	I1206 10:45:05.475958  399286 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1206 10:45:05.475964  399286 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1206 10:45:05.475969  399286 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1206 10:45:05.475976  399286 command_runner.go:130] > # - adjustment of linux namespaces
	I1206 10:45:05.475983  399286 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1206 10:45:05.475990  399286 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1206 10:45:05.476000  399286 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1206 10:45:05.476005  399286 command_runner.go:130] > #
	I1206 10:45:05.476012  399286 command_runner.go:130] > # [crio.nri.default_validator]
	I1206 10:45:05.476020  399286 command_runner.go:130] > # nri_enable_default_validator = false
	I1206 10:45:05.476026  399286 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1206 10:45:05.476035  399286 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1206 10:45:05.476042  399286 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1206 10:45:05.476048  399286 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1206 10:45:05.476056  399286 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1206 10:45:05.476061  399286 command_runner.go:130] > # nri_validator_required_plugins = [
	I1206 10:45:05.476064  399286 command_runner.go:130] > # ]
	I1206 10:45:05.476070  399286 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1206 10:45:05.476079  399286 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1206 10:45:05.476083  399286 command_runner.go:130] > [crio.stats]
	I1206 10:45:05.476089  399286 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1206 10:45:05.476095  399286 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1206 10:45:05.476102  399286 command_runner.go:130] > # stats_collection_period = 0
	I1206 10:45:05.476109  399286 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1206 10:45:05.476119  399286 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1206 10:45:05.476124  399286 command_runner.go:130] > # collection_period = 0
	I1206 10:45:05.476211  399286 cni.go:84] Creating CNI manager for ""
	I1206 10:45:05.476226  399286 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:45:05.476254  399286 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 10:45:05.476282  399286 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-196950 NodeName:functional-196950 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:45:05.476417  399286 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-196950"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 10:45:05.476505  399286 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 10:45:05.483758  399286 command_runner.go:130] > kubeadm
	I1206 10:45:05.483779  399286 command_runner.go:130] > kubectl
	I1206 10:45:05.483784  399286 command_runner.go:130] > kubelet
	I1206 10:45:05.484784  399286 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:45:05.484852  399286 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:45:05.492924  399286 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1206 10:45:05.506239  399286 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 10:45:05.519506  399286 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
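The kubeadm config rendered above has just been copied to /var/tmp/minikube/kubeadm.yaml.new. If you needed to sanity-check such a file by hand on the node, kubeadm ships a validator; a minimal sketch, assuming the file is still in place:

	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new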
	I1206 10:45:05.533524  399286 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 10:45:05.537326  399286 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1206 10:45:05.537418  399286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:45:05.647140  399286 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:45:05.721344  399286 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950 for IP: 192.168.49.2
	I1206 10:45:05.721367  399286 certs.go:195] generating shared ca certs ...
	I1206 10:45:05.721384  399286 certs.go:227] acquiring lock for ca certs: {Name:mke2ec61a37b6f3abbcbeb9abd23d6a19d011dd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:45:05.721593  399286 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.key
	I1206 10:45:05.721667  399286 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.key
	I1206 10:45:05.721683  399286 certs.go:257] generating profile certs ...
	I1206 10:45:05.721813  399286 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.key
	I1206 10:45:05.721910  399286 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.key.a77b39a6
	I1206 10:45:05.721994  399286 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.key
	I1206 10:45:05.722034  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1206 10:45:05.722057  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1206 10:45:05.722073  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1206 10:45:05.722118  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1206 10:45:05.722158  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1206 10:45:05.722199  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1206 10:45:05.722217  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1206 10:45:05.722228  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1206 10:45:05.722301  399286 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855.pem (1338 bytes)
	W1206 10:45:05.722365  399286 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855_empty.pem, impossibly tiny 0 bytes
	I1206 10:45:05.722388  399286 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 10:45:05.722448  399286 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem (1082 bytes)
	I1206 10:45:05.722502  399286 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:45:05.722537  399286 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem (1679 bytes)
	I1206 10:45:05.722611  399286 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem (1708 bytes)
	I1206 10:45:05.722670  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:45:05.722691  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855.pem -> /usr/share/ca-certificates/364855.pem
	I1206 10:45:05.722718  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem -> /usr/share/ca-certificates/3648552.pem
	I1206 10:45:05.723349  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:45:05.743026  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 10:45:05.763126  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:45:05.783337  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 10:45:05.802756  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 10:45:05.821457  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 10:45:05.839993  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:45:05.858402  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 10:45:05.876528  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:45:05.894729  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855.pem --> /usr/share/ca-certificates/364855.pem (1338 bytes)
	I1206 10:45:05.912947  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem --> /usr/share/ca-certificates/3648552.pem (1708 bytes)
	I1206 10:45:05.931356  399286 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 10:45:05.945284  399286 ssh_runner.go:195] Run: openssl version
	I1206 10:45:05.951573  399286 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1206 10:45:05.951648  399286 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:45:05.959293  399286 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:45:05.967114  399286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:45:05.970832  399286 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  6 10:26 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:45:05.971103  399286 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:26 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:45:05.971168  399286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:45:06.014236  399286 command_runner.go:130] > b5213941
	I1206 10:45:06.014768  399286 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 10:45:06.023097  399286 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364855.pem
	I1206 10:45:06.030984  399286 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364855.pem /etc/ssl/certs/364855.pem
	I1206 10:45:06.039316  399286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364855.pem
	I1206 10:45:06.043457  399286 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  6 10:36 /usr/share/ca-certificates/364855.pem
	I1206 10:45:06.043549  399286 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:36 /usr/share/ca-certificates/364855.pem
	I1206 10:45:06.043624  399286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364855.pem
	I1206 10:45:06.084760  399286 command_runner.go:130] > 51391683
	I1206 10:45:06.084914  399286 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 10:45:06.092772  399286 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648552.pem
	I1206 10:45:06.100248  399286 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648552.pem /etc/ssl/certs/3648552.pem
	I1206 10:45:06.107970  399286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648552.pem
	I1206 10:45:06.112031  399286 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  6 10:36 /usr/share/ca-certificates/3648552.pem
	I1206 10:45:06.112134  399286 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:36 /usr/share/ca-certificates/3648552.pem
	I1206 10:45:06.112229  399286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648552.pem
	I1206 10:45:06.152822  399286 command_runner.go:130] > 3ec20f2e
	I1206 10:45:06.153315  399286 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
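Each hash/symlink pair above follows the standard OpenSSL subject-hash convention for trust directories: the certificate's subject hash names a .0 symlink in /etc/ssl/certs. Done by hand it would look like this sketch, using the minikubeCA path from the log:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	# Creates e.g. /etc/ssl/certs/b5213941.0, which is how OpenSSL locates the CA.
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"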
	I1206 10:45:06.161105  399286 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:45:06.165043  399286 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:45:06.165068  399286 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1206 10:45:06.165075  399286 command_runner.go:130] > Device: 259,1	Inode: 1826360     Links: 1
	I1206 10:45:06.165081  399286 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 10:45:06.165087  399286 command_runner.go:130] > Access: 2025-12-06 10:40:58.003190996 +0000
	I1206 10:45:06.165092  399286 command_runner.go:130] > Modify: 2025-12-06 10:36:53.916464205 +0000
	I1206 10:45:06.165098  399286 command_runner.go:130] > Change: 2025-12-06 10:36:53.916464205 +0000
	I1206 10:45:06.165103  399286 command_runner.go:130] >  Birth: 2025-12-06 10:36:53.916464205 +0000
	I1206 10:45:06.165195  399286 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 10:45:06.207365  399286 command_runner.go:130] > Certificate will not expire
	I1206 10:45:06.207850  399286 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 10:45:06.248448  399286 command_runner.go:130] > Certificate will not expire
	I1206 10:45:06.248932  399286 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 10:45:06.289656  399286 command_runner.go:130] > Certificate will not expire
	I1206 10:45:06.290116  399286 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 10:45:06.330828  399286 command_runner.go:130] > Certificate will not expire
	I1206 10:45:06.331412  399286 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 10:45:06.372096  399286 command_runner.go:130] > Certificate will not expire
	I1206 10:45:06.372595  399286 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1206 10:45:06.413596  399286 command_runner.go:130] > Certificate will not expire
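Each "Certificate will not expire" line is openssl's own success output for -checkend: the check exits non-zero if the certificate expires within the given window (86400 seconds, i.e. 24 hours). For example:

	# Exits 0 and prints "Certificate will not expire" if the cert is valid for at least another day.
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400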
	I1206 10:45:06.414056  399286 kubeadm.go:401] StartCluster: {Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:45:06.414151  399286 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:45:06.414217  399286 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:45:06.442677  399286 cri.go:89] found id: ""
	I1206 10:45:06.442751  399286 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:45:06.449938  399286 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1206 10:45:06.449962  399286 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1206 10:45:06.449969  399286 command_runner.go:130] > /var/lib/minikube/etcd:
	I1206 10:45:06.450931  399286 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 10:45:06.450952  399286 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 10:45:06.451032  399286 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 10:45:06.459080  399286 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:45:06.459618  399286 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-196950" does not appear in /home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 10:45:06.459742  399286 kubeconfig.go:62] /home/jenkins/minikube-integration/22047-362985/kubeconfig needs updating (will repair): [kubeconfig missing "functional-196950" cluster setting kubeconfig missing "functional-196950" context setting]
	I1206 10:45:06.460016  399286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/kubeconfig: {Name:mk779651834cfbdc6f0b5e8f5a9abc0f05106181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:45:06.460484  399286 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 10:45:06.460638  399286 kapi.go:59] client config for functional-196950: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.crt", KeyFile:"/home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.key", CAFile:"/home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 10:45:06.461238  399286 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1206 10:45:06.461268  399286 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1206 10:45:06.461280  399286 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1206 10:45:06.461291  399286 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1206 10:45:06.461295  399286 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1206 10:45:06.461337  399286 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1206 10:45:06.461637  399286 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 10:45:06.473548  399286 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1206 10:45:06.473584  399286 kubeadm.go:602] duration metric: took 22.626231ms to restartPrimaryControlPlane
	I1206 10:45:06.473594  399286 kubeadm.go:403] duration metric: took 59.544914ms to StartCluster
	I1206 10:45:06.473609  399286 settings.go:142] acquiring lock: {Name:mk789e01bfd4ab9fa1e2a8415fa99b570b26926a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:45:06.473671  399286 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 10:45:06.474312  399286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/kubeconfig: {Name:mk779651834cfbdc6f0b5e8f5a9abc0f05106181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:45:06.474518  399286 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 10:45:06.474963  399286 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 10:45:06.475042  399286 addons.go:70] Setting storage-provisioner=true in profile "functional-196950"
	I1206 10:45:06.475066  399286 addons.go:239] Setting addon storage-provisioner=true in "functional-196950"
	I1206 10:45:06.475092  399286 host.go:66] Checking if "functional-196950" exists ...
	I1206 10:45:06.475912  399286 cli_runner.go:164] Run: docker container inspect functional-196950 --format={{.State.Status}}
	I1206 10:45:06.476264  399286 config.go:182] Loaded profile config "functional-196950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:45:06.476354  399286 addons.go:70] Setting default-storageclass=true in profile "functional-196950"
	I1206 10:45:06.476394  399286 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-196950"
	I1206 10:45:06.476791  399286 cli_runner.go:164] Run: docker container inspect functional-196950 --format={{.State.Status}}
	I1206 10:45:06.481213  399286 out.go:179] * Verifying Kubernetes components...
	I1206 10:45:06.484465  399286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:45:06.517764  399286 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 10:45:06.517930  399286 kapi.go:59] client config for functional-196950: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.crt", KeyFile:"/home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.key", CAFile:"/home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 10:45:06.518202  399286 addons.go:239] Setting addon default-storageclass=true in "functional-196950"
	I1206 10:45:06.518232  399286 host.go:66] Checking if "functional-196950" exists ...
	I1206 10:45:06.518684  399286 cli_runner.go:164] Run: docker container inspect functional-196950 --format={{.State.Status}}
	I1206 10:45:06.522254  399286 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 10:45:06.525206  399286 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:06.525232  399286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 10:45:06.525299  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:06.551517  399286 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:06.551540  399286 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 10:45:06.551605  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:06.570954  399286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:45:06.593327  399286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:45:06.685314  399286 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:45:06.722168  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:06.737572  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:07.472063  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:07.472098  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:07.472124  399286 retry.go:31] will retry after 153.213078ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:07.472168  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:07.472179  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:07.472186  399286 retry.go:31] will retry after 247.840204ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
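Every apply failure above and below has the same root cause: while the control plane restarts, nothing is listening on port 8441, so kubectl cannot fetch the OpenAPI schema it needs for validation, and minikube retries with growing delays (153ms, 247ms, 503ms, ...). One way to watch for the apiserver coming back, as a sketch (/healthz is the standard apiserver health endpoint; -k skips TLS verification):

	curl -ks https://localhost:8441/healthz || echo "apiserver not reachable yet"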
	I1206 10:45:07.472279  399286 node_ready.go:35] waiting up to 6m0s for node "functional-196950" to be "Ready" ...
	I1206 10:45:07.472418  399286 type.go:168] "Request Body" body=""
	I1206 10:45:07.472509  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:07.472828  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
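The GET requests against /api/v1/nodes/functional-196950 that follow are minikube polling the node's Ready condition on a short fixed interval. The kubectl equivalent would be roughly this sketch, using the kubeconfig path from the log:

	kubectl --kubeconfig /home/jenkins/minikube-integration/22047-362985/kubeconfig \
	  get node functional-196950 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'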
	I1206 10:45:07.626184  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:07.684274  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:07.688010  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:07.688045  399286 retry.go:31] will retry after 503.005947ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:07.720209  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:07.781565  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:07.785057  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:07.785089  399286 retry.go:31] will retry after 443.254463ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:07.973439  399286 type.go:168] "Request Body" body=""
	I1206 10:45:07.973529  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:07.974023  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:08.191658  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:08.229200  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:08.274450  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:08.282645  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:08.282730  399286 retry.go:31] will retry after 342.048952ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:08.327096  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:08.327147  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:08.327166  399286 retry.go:31] will retry after 504.811759ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:08.473470  399286 type.go:168] "Request Body" body=""
	I1206 10:45:08.473573  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:08.473913  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:08.625427  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:08.684176  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:08.687968  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:08.688010  399286 retry.go:31] will retry after 1.261411242s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:08.832256  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:08.891180  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:08.894801  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:08.894836  399286 retry.go:31] will retry after 546.340513ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:08.973077  399286 type.go:168] "Request Body" body=""
	I1206 10:45:08.973155  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:08.973522  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:09.442273  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:09.472729  399286 type.go:168] "Request Body" body=""
	I1206 10:45:09.472803  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:09.473092  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:09.473139  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:09.506571  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:09.510870  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:09.510955  399286 retry.go:31] will retry after 985.837399ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:09.950606  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:09.973212  399286 type.go:168] "Request Body" body=""
	I1206 10:45:09.973298  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:09.973577  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:10.030286  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:10.030402  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:10.030452  399286 retry.go:31] will retry after 829.97822ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:10.472519  399286 type.go:168] "Request Body" body=""
	I1206 10:45:10.472588  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:10.472971  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:10.497156  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:10.582698  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:10.582757  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:10.582779  399286 retry.go:31] will retry after 2.303396874s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:10.861265  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:10.923027  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:10.923124  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:10.923150  399286 retry.go:31] will retry after 2.722563752s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:10.973315  399286 type.go:168] "Request Body" body=""
	I1206 10:45:10.973396  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:10.973700  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:11.473530  399286 type.go:168] "Request Body" body=""
	I1206 10:45:11.473608  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:11.474011  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:11.474073  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:11.972906  399286 type.go:168] "Request Body" body=""
	I1206 10:45:11.972979  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:11.973246  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:12.472617  399286 type.go:168] "Request Body" body=""
	I1206 10:45:12.472696  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:12.473071  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:12.886451  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:12.946418  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:12.951114  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:12.951151  399286 retry.go:31] will retry after 2.435253477s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:12.973196  399286 type.go:168] "Request Body" body=""
	I1206 10:45:12.973267  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:12.973628  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:13.473384  399286 type.go:168] "Request Body" body=""
	I1206 10:45:13.473455  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:13.473719  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:13.646250  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:13.707346  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:13.707418  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:13.707442  399286 retry.go:31] will retry after 2.81497333s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:13.972564  399286 type.go:168] "Request Body" body=""
	I1206 10:45:13.972648  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:13.972980  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:13.973040  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 2 identical node polls at 10:45:14.472 and 10:45:14.972 elided; both fail immediately with connection refused ...]
	I1206 10:45:15.386668  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:15.447515  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:15.447555  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:15.447573  399286 retry.go:31] will retry after 2.327509257s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:15.472847  399286 type.go:168] "Request Body" body=""
	I1206 10:45:15.472922  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:15.473272  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... 2 identical node polls at 10:45:15.973 and 10:45:16.473 elided; the first again logs node_ready.go:55 "connection refused" (will retry) ...]
	I1206 10:45:16.523188  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:16.580568  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:16.584128  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:16.584161  399286 retry.go:31] will retry after 3.565207529s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:16.972816  399286 type.go:168] "Request Body" body=""
	I1206 10:45:16.972893  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:16.973236  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:17.472948  399286 type.go:168] "Request Body" body=""
	I1206 10:45:17.473028  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:17.473355  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:17.775942  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:17.833742  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:17.838032  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:17.838073  399286 retry.go:31] will retry after 9.046125485s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:17.973259  399286 type.go:168] "Request Body" body=""
	I1206 10:45:17.973333  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:17.973605  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... 4 identical node polls from 10:45:18.473 to 10:45:19.972 elided; the 10:45:18.473 attempt again logs node_ready.go:55 "connection refused" (will retry) ...]
	I1206 10:45:20.150467  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:20.215833  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:20.215885  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:20.215905  399286 retry.go:31] will retry after 9.222024728s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:20.473247  399286 type.go:168] "Request Body" body=""
	I1206 10:45:20.473322  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:20.473670  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... 12 identical node polls from 10:45:20.973 to 10:45:26.473 elided; node_ready.go:55 "connection refused" warnings repeat at 10:45:20.973, 10:45:23.473 and 10:45:25.473 ...]
	I1206 10:45:26.884353  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:26.943184  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:26.947029  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:26.947062  399286 retry.go:31] will retry after 13.756266916s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:26.973239  399286 type.go:168] "Request Body" body=""
	I1206 10:45:26.973309  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:26.973589  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... 4 identical node polls from 10:45:27.473 to 10:45:28.972 elided; node_ready.go:55 again warns "connection refused" at 10:45:27.474 ...]
	I1206 10:45:29.438741  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:29.473252  399286 type.go:168] "Request Body" body=""
	I1206 10:45:29.473342  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:29.473619  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:29.500011  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:29.500052  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:29.500073  399286 retry.go:31] will retry after 11.458105653s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:29.972514  399286 type.go:168] "Request Body" body=""
	I1206 10:45:29.972601  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:29.972925  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:29.972975  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	[... 21 identical node polls from 10:45:30.472 to 10:45:40.473 elided; node_ready.go:55 "connection refused" warnings repeat at 10:45:31.973, 10:45:34.473, 10:45:36.473 and 10:45:38.973 ...]
	I1206 10:45:40.704276  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:40.766032  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:40.766082  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:40.766102  399286 retry.go:31] will retry after 12.834175432s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:40.958402  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:40.972905  399286 type.go:168] "Request Body" body=""
	I1206 10:45:40.972992  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:40.973301  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:40.973353  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:41.030830  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:41.030878  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:41.030900  399286 retry.go:31] will retry after 14.333484689s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
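storageclass.yaml and storage-provisioner.yaml are applied on independent retry schedules, which is why their attempts interleave throughout this log. A hedged sketch of that shape, two appliers running in parallel, each with its own backoff; the helper, command path, and attempt count are illustrative assumptions:

```go
package main

import (
	"fmt"
	"os/exec"
	"sync"
	"time"
)

// applyWithRetry shells out to kubectl the way the ssh_runner lines do,
// retrying with a growing delay. Flags mirror the logged command, but
// the helper itself is an illustrative assumption.
func applyWithRetry(manifest string, wg *sync.WaitGroup) {
	defer wg.Done()
	delay := 2 * time.Second
	for attempt := 1; attempt <= 4; attempt++ {
		out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
		if err == nil {
			fmt.Printf("%s applied\n", manifest)
			return
		}
		fmt.Printf("apply %s failed (attempt %d), will retry after %v: %v\n%s", manifest, attempt, delay, err, out)
		time.Sleep(delay)
		delay *= 2
	}
}

func main() {
	var wg sync.WaitGroup
	manifests := []string{
		"/etc/kubernetes/addons/storageclass.yaml",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
	}
	for _, m := range manifests {
		wg.Add(1)
		go applyWithRetry(m, &wg) // each manifest retries on its own schedule
	}
	wg.Wait()
}
```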
	I1206 10:45:41.472501  399286 type.go:168] "Request Body" body=""
	I1206 10:45:41.472600  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:41.472944  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:41.972853  399286 type.go:168] "Request Body" body=""
	I1206 10:45:41.972920  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:41.973187  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:42.472557  399286 type.go:168] "Request Body" body=""
	I1206 10:45:42.472636  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:42.472968  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:42.972558  399286 type.go:168] "Request Body" body=""
	I1206 10:45:42.972635  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:42.972937  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:43.472504  399286 type.go:168] "Request Body" body=""
	I1206 10:45:43.472579  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:43.472849  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:43.472893  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:43.972555  399286 type.go:168] "Request Body" body=""
	I1206 10:45:43.972629  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:43.972940  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:44.472608  399286 type.go:168] "Request Body" body=""
	I1206 10:45:44.472707  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:44.473088  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:44.972724  399286 type.go:168] "Request Body" body=""
	I1206 10:45:44.972794  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:44.973077  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:45.472782  399286 type.go:168] "Request Body" body=""
	I1206 10:45:45.472865  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:45.473241  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:45.473304  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:45.972826  399286 type.go:168] "Request Body" body=""
	I1206 10:45:45.972906  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:45.973262  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:46.473108  399286 type.go:168] "Request Body" body=""
	I1206 10:45:46.473196  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:46.473467  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:46.973436  399286 type.go:168] "Request Body" body=""
	I1206 10:45:46.973508  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:46.973863  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:47.472551  399286 type.go:168] "Request Body" body=""
	I1206 10:45:47.472626  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:47.472969  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:47.972653  399286 type.go:168] "Request Body" body=""
	I1206 10:45:47.972724  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:47.972985  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:47.973026  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:48.472555  399286 type.go:168] "Request Body" body=""
	I1206 10:45:48.472631  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:48.472979  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:48.972567  399286 type.go:168] "Request Body" body=""
	I1206 10:45:48.972648  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:48.973011  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:49.472610  399286 type.go:168] "Request Body" body=""
	I1206 10:45:49.472682  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:49.473011  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... the ~500ms GET poll of https://192.168.49.2:8441/api/v1/nodes/functional-196950 repeats with identical headers through 10:45:53.473; every attempt fails with "connect: connection refused", and node_ready.go:55 logs the will-retry warning roughly every 2s ...]
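The polls summarized above are minikube's node-readiness wait: it GETs the Node object about twice a second and checks its Ready condition, logging a will-retry warning every few failures. A minimal sketch of the same loop with client-go follows (the kubeconfig path, node name, and ~500ms cadence come from the log; the helper name and structure are illustrative, not minikube's actual node_ready.go):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the named node currently has Ready=True.
    func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err // e.g. dial tcp 192.168.49.2:8441: connect: connection refused
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        for range time.Tick(500 * time.Millisecond) { // matches the ~500ms cadence above
            ok, err := nodeReady(context.Background(), cs, "functional-196950")
            if err != nil {
                fmt.Println("will retry:", err) // the node_ready.go:55 warnings above
                continue
            }
            if ok {
                fmt.Println("node is Ready")
                return
            }
        }
    }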
	I1206 10:45:53.600459  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:53.661736  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:53.665292  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:53.665323  399286 retry.go:31] will retry after 22.486760262s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
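The apply never reaches the server: even with --force, kubectl first runs client-side schema validation, which downloads /openapi/v2 from the apiserver, and that download is what dies with connection refused on port 8441. The --validate=false suggestion in the error would skip the schema fetch, but it is no fix here, since the apply itself would then hit the same dead endpoint. A sketch of rerunning the logged command with validation disabled, just to isolate the validation step (the exec wrapper is illustrative, not minikube code):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same apply the log keeps retrying, with client-side validation
        // skipped; while the apiserver on :8441 refuses connections this
        // still fails, only later and with a different error.
        out, err := exec.Command("sudo",
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
            "apply", "--force", "--validate=false",
            "-f", "/etc/kubernetes/addons/storageclass.yaml",
        ).CombinedOutput()
        fmt.Printf("%s(err=%v)\n", out, err)
    }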
	[... the poll continues through 10:45:54.973 with the same refused connection; will-retry warning at 10:45:54.473 ...]
	I1206 10:45:55.364722  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:55.425632  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:55.425678  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:55.425713  399286 retry.go:31] will retry after 12.507538253s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
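Note that the two manifests retry on independent, jittered schedules: storageclass.yaml waits 22.49s while storage-provisioner.yaml waits 12.51s, so the applies drift apart rather than hammering the apiserver in lockstep. A plain-Go sketch of that pattern (the jitter bound and attempt count are assumptions, not minikube's retry.go constants):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // applyWithRetry retries fn with a random delay between attempts,
    // mirroring the "will retry after Ns" lines in the log above.
    func applyWithRetry(name string, fn func() error, attempts int) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            delay := time.Second + time.Duration(rand.Int63n(int64(30*time.Second)))
            fmt.Printf("%s: will retry after %s: %v\n", name, delay, err)
            time.Sleep(delay)
        }
        return err
    }

    func main() {
        _ = applyWithRetry("storageclass.yaml", func() error {
            return errors.New("connect: connection refused")
        }, 3)
    }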
	[... the poll continues every ~500ms from 10:45:55.472 through 10:46:07.473, every attempt refused; node_ready.go:55 repeats the will-retry warning roughly every 2.5s ...]
	I1206 10:46:07.933511  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:46:07.973023  399286 type.go:168] "Request Body" body=""
	I1206 10:46:07.973096  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:07.973373  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:07.994514  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:46:07.994569  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:46:07.994603  399286 retry.go:31] will retry after 24.706041433s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
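Both addresses in these errors point at the same dead process: the readiness poll hits 192.168.49.2:8441 from the host while kubectl, running on the node, hits localhost:8441, and both are refused, so the apiserver itself is down rather than a routing or firewall issue between host and node. A quick stdlib probe of both endpoints makes that check explicit (a sketch; the addresses are the ones in the log):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        for _, addr := range []string{"192.168.49.2:8441", "localhost:8441"} {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err != nil {
                fmt.Println(addr, "unreachable:", err) // "connect: connection refused" above
                continue
            }
            conn.Close()
            fmt.Println(addr, "accepting connections")
        }
    }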
	[... the poll continues from 10:46:08.473 through 10:46:15.972, every attempt refused; will-retry warnings roughly every 2.5s ...]
	I1206 10:46:16.153289  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:46:16.211194  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:46:16.214959  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:46:16.214991  399286 retry.go:31] will retry after 16.737835039s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... the poll continues from 10:46:16.473 through 10:46:32.472, every attempt refused; will-retry warnings roughly every 2.5s ...]
	I1206 10:46:32.701368  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:46:32.764470  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:46:32.764520  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:46:32.764620  399286 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
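Here the storage-provisioner apply runs out of retries, so the failure is promoted from a will-retry warning to the addon-enable error above, while storageclass still has one retry pending (30.2s later). Gating the applies on the apiserver port accepting connections would avoid burning the retry budget during an outage; a bounded wait, sketched with the standard library (the two-minute budget is an assumption):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForAPIServer polls the apiserver port until it accepts a TCP
    // connection or the deadline passes.
    func waitForAPIServer(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if conn, err := net.DialTimeout("tcp", addr, time.Second); err == nil {
                conn.Close()
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver %s not reachable within %s", addr, timeout)
    }

    func main() {
        if err := waitForAPIServer("192.168.49.2:8441", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }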
	I1206 10:46:32.953898  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:46:32.973480  399286 type.go:168] "Request Body" body=""
	I1206 10:46:32.973551  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:32.973819  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:33.013712  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:46:33.017430  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:46:33.017463  399286 retry.go:31] will retry after 30.205234164s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... the poll continues from 10:46:33.472 through 10:46:41.472, every attempt refused; will-retry warnings roughly every 2.5s ...]
	I1206 10:46:41.972839  399286 type.go:168] "Request Body" body=""
	I1206 10:46:41.972921  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:41.973277  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:42.473121  399286 type.go:168] "Request Body" body=""
	I1206 10:46:42.473205  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:42.473535  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:42.473598  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:42.973295  399286 type.go:168] "Request Body" body=""
	I1206 10:46:42.973369  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:42.973633  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:43.474561  399286 type.go:168] "Request Body" body=""
	I1206 10:46:43.474632  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:43.474989  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:43.972751  399286 type.go:168] "Request Body" body=""
	I1206 10:46:43.972830  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:43.973164  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:44.472525  399286 type.go:168] "Request Body" body=""
	I1206 10:46:44.472603  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:44.472924  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:44.972616  399286 type.go:168] "Request Body" body=""
	I1206 10:46:44.972690  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:44.972993  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:44.973041  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:45.472572  399286 type.go:168] "Request Body" body=""
	I1206 10:46:45.472654  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:45.473032  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:45.972746  399286 type.go:168] "Request Body" body=""
	I1206 10:46:45.972818  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:45.973081  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:46.473093  399286 type.go:168] "Request Body" body=""
	I1206 10:46:46.473169  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:46.473502  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:46.972476  399286 type.go:168] "Request Body" body=""
	I1206 10:46:46.972548  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:46.972884  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:47.472504  399286 type.go:168] "Request Body" body=""
	I1206 10:46:47.472574  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:47.472853  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:47.472901  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:47.972549  399286 type.go:168] "Request Body" body=""
	I1206 10:46:47.972622  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:47.972948  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:48.472667  399286 type.go:168] "Request Body" body=""
	I1206 10:46:48.472745  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:48.473110  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:48.972503  399286 type.go:168] "Request Body" body=""
	I1206 10:46:48.972577  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:48.972841  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:49.472552  399286 type.go:168] "Request Body" body=""
	I1206 10:46:49.472628  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:49.472955  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:49.473012  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:49.972575  399286 type.go:168] "Request Body" body=""
	I1206 10:46:49.972653  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:49.972977  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:50.472510  399286 type.go:168] "Request Body" body=""
	I1206 10:46:50.472585  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:50.472943  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:50.972627  399286 type.go:168] "Request Body" body=""
	I1206 10:46:50.972741  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:50.973101  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:51.472815  399286 type.go:168] "Request Body" body=""
	I1206 10:46:51.472914  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:51.473280  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:51.473354  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:51.973315  399286 type.go:168] "Request Body" body=""
	I1206 10:46:51.973390  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:51.973667  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:52.473496  399286 type.go:168] "Request Body" body=""
	I1206 10:46:52.473597  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:52.473928  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:52.972621  399286 type.go:168] "Request Body" body=""
	I1206 10:46:52.972697  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:52.973027  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:53.472511  399286 type.go:168] "Request Body" body=""
	I1206 10:46:53.472581  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:53.472850  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:53.972553  399286 type.go:168] "Request Body" body=""
	I1206 10:46:53.972631  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:53.973006  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:53.973079  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:54.472752  399286 type.go:168] "Request Body" body=""
	I1206 10:46:54.472832  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:54.473199  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:54.972887  399286 type.go:168] "Request Body" body=""
	I1206 10:46:54.972975  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:54.973260  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:55.472573  399286 type.go:168] "Request Body" body=""
	I1206 10:46:55.472649  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:55.473014  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:55.972568  399286 type.go:168] "Request Body" body=""
	I1206 10:46:55.972665  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:55.973053  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:55.973130  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:56.472795  399286 type.go:168] "Request Body" body=""
	I1206 10:46:56.472877  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:56.473146  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:56.972884  399286 type.go:168] "Request Body" body=""
	I1206 10:46:56.972967  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:56.973286  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:57.473158  399286 type.go:168] "Request Body" body=""
	I1206 10:46:57.473263  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:57.473709  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:57.973446  399286 type.go:168] "Request Body" body=""
	I1206 10:46:57.973526  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:57.973793  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:57.973842  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:58.472543  399286 type.go:168] "Request Body" body=""
	I1206 10:46:58.472635  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:58.473049  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:58.972820  399286 type.go:168] "Request Body" body=""
	I1206 10:46:58.972904  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:58.973235  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:59.472499  399286 type.go:168] "Request Body" body=""
	I1206 10:46:59.472588  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:59.472924  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:59.972544  399286 type.go:168] "Request Body" body=""
	I1206 10:46:59.972618  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:59.972934  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:00.472668  399286 type.go:168] "Request Body" body=""
	I1206 10:47:00.472777  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:00.473113  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:00.473167  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:00.972603  399286 type.go:168] "Request Body" body=""
	I1206 10:47:00.972685  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:00.973038  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:01.472631  399286 type.go:168] "Request Body" body=""
	I1206 10:47:01.472707  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:01.473290  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:01.973256  399286 type.go:168] "Request Body" body=""
	I1206 10:47:01.973331  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:01.973589  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:02.473484  399286 type.go:168] "Request Body" body=""
	I1206 10:47:02.473561  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:02.473896  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:02.473948  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:02.972573  399286 type.go:168] "Request Body" body=""
	I1206 10:47:02.972648  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:02.972960  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:03.223490  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:47:03.285940  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:47:03.285995  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:47:03.286078  399286 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1206 10:47:03.289299  399286 out.go:179] * Enabled addons: 
	I1206 10:47:03.293166  399286 addons.go:530] duration metric: took 1m56.818196786s for enable addons: enabled=[]
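The storageclass addon fails for the same underlying reason: kubectl apply cannot fetch the OpenAPI schema from localhost:8441 for validation, so addons.go logs "apply failed, will retry" and the enable step ultimately reports enabled=[] after 1m56s. A small sketch of that retry-apply pattern (hypothetical, shelling out to kubectl the way the logged command line does; the manifest path mirrors the log, and the attempt count is an assumption):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry re-runs "kubectl apply --force -f <manifest>" until it
// succeeds or the attempts are exhausted, backing off between tries while
// the apiserver restarts.
func applyWithRetry(manifest string, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		out, e := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
		if e == nil {
			return nil
		}
		err = fmt.Errorf("apply %s: %v: %s", manifest, e, out)
		time.Sleep(2 * time.Second)
	}
	return err
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
		fmt.Println("giving up:", err)
	}
}

As the stderr above notes, passing --validate=false would skip the OpenAPI download, but the apply would still fail here because the apiserver itself is unreachable.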
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-196950 request and empty response repeat every ~500ms from 10:47:03 through 10:47:34 (~64 identical iterations elided), with a node_ready.go "connection refused (will retry)" warning logged every few iterations ...]
	I1206 10:47:35.472565  399286 type.go:168] "Request Body" body=""
	I1206 10:47:35.472646  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:35.473023  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:35.972740  399286 type.go:168] "Request Body" body=""
	I1206 10:47:35.972818  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:35.973158  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:36.473170  399286 type.go:168] "Request Body" body=""
	I1206 10:47:36.473254  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:36.473528  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:36.473569  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:36.972525  399286 type.go:168] "Request Body" body=""
	I1206 10:47:36.972602  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:36.972938  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:37.472576  399286 type.go:168] "Request Body" body=""
	I1206 10:47:37.472656  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:37.473000  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:37.972555  399286 type.go:168] "Request Body" body=""
	I1206 10:47:37.972627  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:37.972895  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:38.472579  399286 type.go:168] "Request Body" body=""
	I1206 10:47:38.472663  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:38.473008  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:38.972569  399286 type.go:168] "Request Body" body=""
	I1206 10:47:38.972649  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:38.973012  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:38.973070  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:39.472526  399286 type.go:168] "Request Body" body=""
	I1206 10:47:39.472602  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:39.472864  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:39.972558  399286 type.go:168] "Request Body" body=""
	I1206 10:47:39.972636  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:39.972965  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:40.472567  399286 type.go:168] "Request Body" body=""
	I1206 10:47:40.472639  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:40.472972  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:40.972512  399286 type.go:168] "Request Body" body=""
	I1206 10:47:40.972588  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:40.972883  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:41.472541  399286 type.go:168] "Request Body" body=""
	I1206 10:47:41.472626  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:41.472980  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:41.473038  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:41.972863  399286 type.go:168] "Request Body" body=""
	I1206 10:47:41.972939  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:41.973307  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:42.472600  399286 type.go:168] "Request Body" body=""
	I1206 10:47:42.472680  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:42.472974  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:42.972681  399286 type.go:168] "Request Body" body=""
	I1206 10:47:42.972759  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:42.973100  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:43.472601  399286 type.go:168] "Request Body" body=""
	I1206 10:47:43.472694  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:43.473056  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:43.473116  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:43.972500  399286 type.go:168] "Request Body" body=""
	I1206 10:47:43.972579  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:43.972899  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:44.472587  399286 type.go:168] "Request Body" body=""
	I1206 10:47:44.472675  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:44.473014  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:44.972574  399286 type.go:168] "Request Body" body=""
	I1206 10:47:44.972651  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:44.973031  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:45.472655  399286 type.go:168] "Request Body" body=""
	I1206 10:47:45.472726  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:45.473026  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:45.972718  399286 type.go:168] "Request Body" body=""
	I1206 10:47:45.972800  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:45.973152  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:45.973210  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:46.472509  399286 type.go:168] "Request Body" body=""
	I1206 10:47:46.472600  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:46.472959  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:46.972791  399286 type.go:168] "Request Body" body=""
	I1206 10:47:46.972860  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:46.973128  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:47.472798  399286 type.go:168] "Request Body" body=""
	I1206 10:47:47.472874  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:47.473208  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:47.972568  399286 type.go:168] "Request Body" body=""
	I1206 10:47:47.972648  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:47.972980  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:48.473410  399286 type.go:168] "Request Body" body=""
	I1206 10:47:48.473482  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:48.473747  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:48.473789  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:48.973486  399286 type.go:168] "Request Body" body=""
	I1206 10:47:48.973564  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:48.973890  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:49.472605  399286 type.go:168] "Request Body" body=""
	I1206 10:47:49.472724  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:49.473137  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:49.972518  399286 type.go:168] "Request Body" body=""
	I1206 10:47:49.972592  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:49.972867  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:50.472548  399286 type.go:168] "Request Body" body=""
	I1206 10:47:50.472628  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:50.472960  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:50.972581  399286 type.go:168] "Request Body" body=""
	I1206 10:47:50.972656  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:50.972999  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:50.973058  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:51.472529  399286 type.go:168] "Request Body" body=""
	I1206 10:47:51.472601  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:51.472873  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:51.972855  399286 type.go:168] "Request Body" body=""
	I1206 10:47:51.972934  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:51.973251  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:52.472527  399286 type.go:168] "Request Body" body=""
	I1206 10:47:52.472603  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:52.472925  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:52.972632  399286 type.go:168] "Request Body" body=""
	I1206 10:47:52.972710  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:52.973009  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:53.472551  399286 type.go:168] "Request Body" body=""
	I1206 10:47:53.472635  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:53.473004  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:53.473083  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:53.972778  399286 type.go:168] "Request Body" body=""
	I1206 10:47:53.972868  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:53.973278  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:54.472595  399286 type.go:168] "Request Body" body=""
	I1206 10:47:54.472680  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:54.473008  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:54.972544  399286 type.go:168] "Request Body" body=""
	I1206 10:47:54.972624  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:54.972997  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:55.472544  399286 type.go:168] "Request Body" body=""
	I1206 10:47:55.472633  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:55.472967  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:55.972686  399286 type.go:168] "Request Body" body=""
	I1206 10:47:55.972759  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:55.973084  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:55.973129  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:56.472527  399286 type.go:168] "Request Body" body=""
	I1206 10:47:56.472600  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:56.472935  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:56.972607  399286 type.go:168] "Request Body" body=""
	I1206 10:47:56.972688  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:56.973052  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:57.472495  399286 type.go:168] "Request Body" body=""
	I1206 10:47:57.472571  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:57.472885  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:57.972579  399286 type.go:168] "Request Body" body=""
	I1206 10:47:57.972653  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:57.972989  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:58.472576  399286 type.go:168] "Request Body" body=""
	I1206 10:47:58.472654  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:58.472981  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:58.473038  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:58.972524  399286 type.go:168] "Request Body" body=""
	I1206 10:47:58.972595  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:58.972920  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:59.472623  399286 type.go:168] "Request Body" body=""
	I1206 10:47:59.472702  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:59.473058  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:59.972774  399286 type.go:168] "Request Body" body=""
	I1206 10:47:59.972856  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:59.973198  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:00.472879  399286 type.go:168] "Request Body" body=""
	I1206 10:48:00.472963  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:00.473302  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:00.473350  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:00.973100  399286 type.go:168] "Request Body" body=""
	I1206 10:48:00.973182  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:00.973500  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:01.473348  399286 type.go:168] "Request Body" body=""
	I1206 10:48:01.473426  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:01.473749  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:01.972487  399286 type.go:168] "Request Body" body=""
	I1206 10:48:01.972565  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:01.972839  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:02.472524  399286 type.go:168] "Request Body" body=""
	I1206 10:48:02.472604  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:02.472916  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:02.972566  399286 type.go:168] "Request Body" body=""
	I1206 10:48:02.972640  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:02.972945  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:02.972990  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:03.472551  399286 type.go:168] "Request Body" body=""
	I1206 10:48:03.472641  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:03.472970  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:03.972530  399286 type.go:168] "Request Body" body=""
	I1206 10:48:03.972607  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:03.972945  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:04.472651  399286 type.go:168] "Request Body" body=""
	I1206 10:48:04.472730  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:04.473079  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:04.972502  399286 type.go:168] "Request Body" body=""
	I1206 10:48:04.972576  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:04.972860  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:05.472568  399286 type.go:168] "Request Body" body=""
	I1206 10:48:05.472646  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:05.473022  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:05.473077  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:05.972744  399286 type.go:168] "Request Body" body=""
	I1206 10:48:05.972834  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:05.973199  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:06.473241  399286 type.go:168] "Request Body" body=""
	I1206 10:48:06.473315  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:06.473604  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:06.972611  399286 type.go:168] "Request Body" body=""
	I1206 10:48:06.972691  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:06.972992  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:07.472569  399286 type.go:168] "Request Body" body=""
	I1206 10:48:07.472658  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:07.473030  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:07.972580  399286 type.go:168] "Request Body" body=""
	I1206 10:48:07.972659  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:07.972925  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:07.972966  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:08.472573  399286 type.go:168] "Request Body" body=""
	I1206 10:48:08.472665  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:08.472999  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:08.972712  399286 type.go:168] "Request Body" body=""
	I1206 10:48:08.972805  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:08.973106  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:09.472510  399286 type.go:168] "Request Body" body=""
	I1206 10:48:09.472584  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:09.472910  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:09.972623  399286 type.go:168] "Request Body" body=""
	I1206 10:48:09.972697  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:09.973067  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:09.973119  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:10.472815  399286 type.go:168] "Request Body" body=""
	I1206 10:48:10.472886  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:10.473224  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:10.972551  399286 type.go:168] "Request Body" body=""
	I1206 10:48:10.972627  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:10.972947  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:11.472597  399286 type.go:168] "Request Body" body=""
	I1206 10:48:11.472675  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:11.473029  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:11.972904  399286 type.go:168] "Request Body" body=""
	I1206 10:48:11.972981  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:11.973328  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:11.973383  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:12.472840  399286 type.go:168] "Request Body" body=""
	I1206 10:48:12.472917  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:12.473212  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:12.972545  399286 type.go:168] "Request Body" body=""
	I1206 10:48:12.972620  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:12.972959  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:13.472663  399286 type.go:168] "Request Body" body=""
	I1206 10:48:13.472740  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:13.473115  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:13.972809  399286 type.go:168] "Request Body" body=""
	I1206 10:48:13.972882  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:13.973148  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:14.472560  399286 type.go:168] "Request Body" body=""
	I1206 10:48:14.472633  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:14.472974  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:14.473026  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:14.972547  399286 type.go:168] "Request Body" body=""
	I1206 10:48:14.972633  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:14.972981  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:15.472494  399286 type.go:168] "Request Body" body=""
	I1206 10:48:15.472572  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:15.472888  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:15.972557  399286 type.go:168] "Request Body" body=""
	I1206 10:48:15.972632  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:15.973009  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:16.472796  399286 type.go:168] "Request Body" body=""
	I1206 10:48:16.472875  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:16.473235  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:16.473293  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:16.972964  399286 type.go:168] "Request Body" body=""
	I1206 10:48:16.973036  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:16.973307  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:17.473074  399286 type.go:168] "Request Body" body=""
	I1206 10:48:17.473147  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:17.473485  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:17.973295  399286 type.go:168] "Request Body" body=""
	I1206 10:48:17.973378  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:17.973725  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:18.473438  399286 type.go:168] "Request Body" body=""
	I1206 10:48:18.473505  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:18.473841  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:18.473920  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:18.972621  399286 type.go:168] "Request Body" body=""
	I1206 10:48:18.972695  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:18.973065  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:19.472598  399286 type.go:168] "Request Body" body=""
	I1206 10:48:19.472705  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:19.473114  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:19.972515  399286 type.go:168] "Request Body" body=""
	I1206 10:48:19.972585  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:19.972856  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:20.472548  399286 type.go:168] "Request Body" body=""
	I1206 10:48:20.472625  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:20.472958  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:20.972576  399286 type.go:168] "Request Body" body=""
	I1206 10:48:20.972660  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:20.973023  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:20.973083  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:21.472615  399286 type.go:168] "Request Body" body=""
	I1206 10:48:21.472686  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:21.472963  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:21.972979  399286 type.go:168] "Request Body" body=""
	I1206 10:48:21.973061  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:21.973404  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:22.473200  399286 type.go:168] "Request Body" body=""
	I1206 10:48:22.473283  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:22.473635  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:22.973360  399286 type.go:168] "Request Body" body=""
	I1206 10:48:22.973441  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:22.973782  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:22.973843  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:23.472499  399286 type.go:168] "Request Body" body=""
	I1206 10:48:23.472580  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:23.472916  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:23.972556  399286 type.go:168] "Request Body" body=""
	I1206 10:48:23.972636  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:23.972975  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:24.472447  399286 type.go:168] "Request Body" body=""
	I1206 10:48:24.472514  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:24.472774  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:24.972471  399286 type.go:168] "Request Body" body=""
	I1206 10:48:24.972545  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:24.972884  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:25.472578  399286 type.go:168] "Request Body" body=""
	I1206 10:48:25.472667  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:25.473021  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:25.473076  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	[... identical request/response cycles elided: the GET to https://192.168.49.2:8441/api/v1/nodes/functional-196950 shown above is retried every ~500ms from 10:48:25 through 10:49:26; every attempt fails with "dial tcp 192.168.49.2:8441: connect: connection refused", and node_ready.go:55 logs a "will retry" warning roughly every 2s ...]
	I1206 10:49:26.473315  399286 type.go:168] "Request Body" body=""
	I1206 10:49:26.473401  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:26.473746  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:26.473798  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:26.972480  399286 type.go:168] "Request Body" body=""
	I1206 10:49:26.972564  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:26.972910  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:27.472459  399286 type.go:168] "Request Body" body=""
	I1206 10:49:27.472532  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:27.472791  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:27.972488  399286 type.go:168] "Request Body" body=""
	I1206 10:49:27.972566  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:27.972886  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:28.472548  399286 type.go:168] "Request Body" body=""
	I1206 10:49:28.472623  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:28.472959  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:28.972639  399286 type.go:168] "Request Body" body=""
	I1206 10:49:28.972711  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:28.973000  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:28.973048  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:29.472558  399286 type.go:168] "Request Body" body=""
	I1206 10:49:29.472637  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:29.472984  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:29.972552  399286 type.go:168] "Request Body" body=""
	I1206 10:49:29.972639  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:29.972980  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:30.472653  399286 type.go:168] "Request Body" body=""
	I1206 10:49:30.472729  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:30.473004  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:30.972581  399286 type.go:168] "Request Body" body=""
	I1206 10:49:30.972663  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:30.972997  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:31.472551  399286 type.go:168] "Request Body" body=""
	I1206 10:49:31.472633  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:31.472995  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:31.473053  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:31.972765  399286 type.go:168] "Request Body" body=""
	I1206 10:49:31.972832  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:31.973098  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:32.472547  399286 type.go:168] "Request Body" body=""
	I1206 10:49:32.472631  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:32.473016  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:32.972568  399286 type.go:168] "Request Body" body=""
	I1206 10:49:32.972645  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:32.972982  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:33.472517  399286 type.go:168] "Request Body" body=""
	I1206 10:49:33.472591  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:33.472911  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:33.972503  399286 type.go:168] "Request Body" body=""
	I1206 10:49:33.972576  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:33.972901  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:33.972964  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:34.472657  399286 type.go:168] "Request Body" body=""
	I1206 10:49:34.472734  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:34.473129  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:34.972818  399286 type.go:168] "Request Body" body=""
	I1206 10:49:34.972889  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:34.973175  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:35.472878  399286 type.go:168] "Request Body" body=""
	I1206 10:49:35.472955  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:35.473329  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:35.973094  399286 type.go:168] "Request Body" body=""
	I1206 10:49:35.973174  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:35.973494  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:35.973549  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:36.472432  399286 type.go:168] "Request Body" body=""
	I1206 10:49:36.472505  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:36.472781  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:36.972828  399286 type.go:168] "Request Body" body=""
	I1206 10:49:36.972905  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:36.973252  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:37.472563  399286 type.go:168] "Request Body" body=""
	I1206 10:49:37.472637  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:37.472994  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:37.972683  399286 type.go:168] "Request Body" body=""
	I1206 10:49:37.972763  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:37.973077  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:38.472554  399286 type.go:168] "Request Body" body=""
	I1206 10:49:38.472633  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:38.472969  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:38.473033  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:38.972729  399286 type.go:168] "Request Body" body=""
	I1206 10:49:38.972808  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:38.973142  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:39.472497  399286 type.go:168] "Request Body" body=""
	I1206 10:49:39.472571  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:39.472854  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:39.972565  399286 type.go:168] "Request Body" body=""
	I1206 10:49:39.972639  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:39.972983  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:40.472566  399286 type.go:168] "Request Body" body=""
	I1206 10:49:40.472647  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:40.472966  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:40.972679  399286 type.go:168] "Request Body" body=""
	I1206 10:49:40.972760  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:40.973065  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:40.973121  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:41.472568  399286 type.go:168] "Request Body" body=""
	I1206 10:49:41.472657  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:41.473025  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:41.972922  399286 type.go:168] "Request Body" body=""
	I1206 10:49:41.972998  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:41.973339  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:42.473053  399286 type.go:168] "Request Body" body=""
	I1206 10:49:42.473124  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:42.473408  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:42.973276  399286 type.go:168] "Request Body" body=""
	I1206 10:49:42.973355  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:42.973694  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:42.973752  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:43.473503  399286 type.go:168] "Request Body" body=""
	I1206 10:49:43.473574  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:43.473918  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:43.972599  399286 type.go:168] "Request Body" body=""
	I1206 10:49:43.972670  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:43.973000  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:44.472572  399286 type.go:168] "Request Body" body=""
	I1206 10:49:44.472649  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:44.472982  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:44.972582  399286 type.go:168] "Request Body" body=""
	I1206 10:49:44.972670  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:44.973019  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:45.472513  399286 type.go:168] "Request Body" body=""
	I1206 10:49:45.472583  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:45.472857  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:45.472901  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:45.972615  399286 type.go:168] "Request Body" body=""
	I1206 10:49:45.972695  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:45.973015  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:46.472495  399286 type.go:168] "Request Body" body=""
	I1206 10:49:46.472575  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:46.472931  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:46.972626  399286 type.go:168] "Request Body" body=""
	I1206 10:49:46.974621  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:46.974930  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:47.472626  399286 type.go:168] "Request Body" body=""
	I1206 10:49:47.472726  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:47.473070  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:47.473127  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:47.972571  399286 type.go:168] "Request Body" body=""
	I1206 10:49:47.972667  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:47.972989  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:48.472520  399286 type.go:168] "Request Body" body=""
	I1206 10:49:48.472595  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:48.472920  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:48.972585  399286 type.go:168] "Request Body" body=""
	I1206 10:49:48.972669  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:48.973030  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:49.472592  399286 type.go:168] "Request Body" body=""
	I1206 10:49:49.472671  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:49.473006  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:49.972683  399286 type.go:168] "Request Body" body=""
	I1206 10:49:49.972754  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:49.973062  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:49.973108  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:50.472567  399286 type.go:168] "Request Body" body=""
	I1206 10:49:50.472644  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:50.472979  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:50.972718  399286 type.go:168] "Request Body" body=""
	I1206 10:49:50.972795  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:50.973180  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:51.472470  399286 type.go:168] "Request Body" body=""
	I1206 10:49:51.472554  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:51.472819  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:51.972847  399286 type.go:168] "Request Body" body=""
	I1206 10:49:51.972931  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:51.973379  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:51.973433  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:52.473217  399286 type.go:168] "Request Body" body=""
	I1206 10:49:52.473304  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:52.473657  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:52.973426  399286 type.go:168] "Request Body" body=""
	I1206 10:49:52.973497  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:52.973879  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:53.472575  399286 type.go:168] "Request Body" body=""
	I1206 10:49:53.472654  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:53.473006  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:53.972732  399286 type.go:168] "Request Body" body=""
	I1206 10:49:53.972810  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:53.973150  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:54.472486  399286 type.go:168] "Request Body" body=""
	I1206 10:49:54.472557  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:54.472823  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:54.472866  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:54.972537  399286 type.go:168] "Request Body" body=""
	I1206 10:49:54.972613  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:54.972990  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:55.472700  399286 type.go:168] "Request Body" body=""
	I1206 10:49:55.472773  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:55.473120  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:55.972590  399286 type.go:168] "Request Body" body=""
	I1206 10:49:55.972662  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:55.972932  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:56.472845  399286 type.go:168] "Request Body" body=""
	I1206 10:49:56.472928  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:56.473307  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:56.473367  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:56.973002  399286 type.go:168] "Request Body" body=""
	I1206 10:49:56.973079  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:56.973419  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:57.473158  399286 type.go:168] "Request Body" body=""
	I1206 10:49:57.473232  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:57.473497  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:57.973292  399286 type.go:168] "Request Body" body=""
	I1206 10:49:57.973367  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:57.973704  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:58.473494  399286 type.go:168] "Request Body" body=""
	I1206 10:49:58.473568  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:58.473902  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:58.473963  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:58.972557  399286 type.go:168] "Request Body" body=""
	I1206 10:49:58.972629  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:58.972908  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:59.472542  399286 type.go:168] "Request Body" body=""
	I1206 10:49:59.472622  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:59.472961  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:59.972698  399286 type.go:168] "Request Body" body=""
	I1206 10:49:59.972792  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:59.973143  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:00.475191  399286 type.go:168] "Request Body" body=""
	I1206 10:50:00.475305  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:00.475740  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:00.475791  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:00.972513  399286 type.go:168] "Request Body" body=""
	I1206 10:50:00.972593  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:00.972946  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:01.472607  399286 type.go:168] "Request Body" body=""
	I1206 10:50:01.472698  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:01.473000  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:01.972891  399286 type.go:168] "Request Body" body=""
	I1206 10:50:01.972964  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:01.973246  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:02.473133  399286 type.go:168] "Request Body" body=""
	I1206 10:50:02.473209  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:02.473541  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:02.973359  399286 type.go:168] "Request Body" body=""
	I1206 10:50:02.973436  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:02.973744  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:02.973794  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:03.472437  399286 type.go:168] "Request Body" body=""
	I1206 10:50:03.472515  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:03.472786  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:03.972517  399286 type.go:168] "Request Body" body=""
	I1206 10:50:03.972617  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:03.972974  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:04.472690  399286 type.go:168] "Request Body" body=""
	I1206 10:50:04.472770  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:04.473092  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:04.972613  399286 type.go:168] "Request Body" body=""
	I1206 10:50:04.972688  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:04.973025  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:05.472589  399286 type.go:168] "Request Body" body=""
	I1206 10:50:05.472662  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:05.472985  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:05.473041  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:05.972699  399286 type.go:168] "Request Body" body=""
	I1206 10:50:05.972774  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:05.973134  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:06.472872  399286 type.go:168] "Request Body" body=""
	I1206 10:50:06.472950  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:06.473229  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:06.973317  399286 type.go:168] "Request Body" body=""
	I1206 10:50:06.973399  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:06.973730  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:07.472451  399286 type.go:168] "Request Body" body=""
	I1206 10:50:07.472529  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:07.472886  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:07.972570  399286 type.go:168] "Request Body" body=""
	I1206 10:50:07.972651  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:07.972971  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:07.973027  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:08.472555  399286 type.go:168] "Request Body" body=""
	I1206 10:50:08.472629  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:08.472968  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:08.972685  399286 type.go:168] "Request Body" body=""
	I1206 10:50:08.972768  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:08.973126  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:09.472810  399286 type.go:168] "Request Body" body=""
	I1206 10:50:09.472880  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:09.473152  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:09.972581  399286 type.go:168] "Request Body" body=""
	I1206 10:50:09.972655  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:09.973005  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:09.973065  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:10.472745  399286 type.go:168] "Request Body" body=""
	I1206 10:50:10.472826  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:10.473165  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:10.972520  399286 type.go:168] "Request Body" body=""
	I1206 10:50:10.972626  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:10.972896  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:11.472575  399286 type.go:168] "Request Body" body=""
	I1206 10:50:11.472653  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:11.472988  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:11.972973  399286 type.go:168] "Request Body" body=""
	I1206 10:50:11.973057  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:11.973408  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:11.973457  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:12.473192  399286 type.go:168] "Request Body" body=""
	I1206 10:50:12.473263  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:12.473529  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:12.973277  399286 type.go:168] "Request Body" body=""
	I1206 10:50:12.973358  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:12.973714  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:13.473443  399286 type.go:168] "Request Body" body=""
	I1206 10:50:13.473531  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:13.473882  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:13.972569  399286 type.go:168] "Request Body" body=""
	I1206 10:50:13.972666  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:13.972977  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:14.472568  399286 type.go:168] "Request Body" body=""
	I1206 10:50:14.472647  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:14.472975  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:14.473038  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:14.972576  399286 type.go:168] "Request Body" body=""
	I1206 10:50:14.972652  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:14.972994  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:15.472548  399286 type.go:168] "Request Body" body=""
	I1206 10:50:15.472616  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:15.472889  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:15.972575  399286 type.go:168] "Request Body" body=""
	I1206 10:50:15.972652  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:15.972980  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:16.472885  399286 type.go:168] "Request Body" body=""
	I1206 10:50:16.472971  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:16.473326  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:16.473380  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the same GET https://192.168.49.2:8441/api/v1/nodes/functional-196950 poll repeats every ~500ms from 10:50:16 through 10:51:07, each cycle ending in "dial tcp 192.168.49.2:8441: connect: connection refused"; only the timestamps differ ...]
	I1206 10:51:07.473123  399286 type.go:168] "Request Body" body=""
	I1206 10:51:07.473186  399286 node_ready.go:38] duration metric: took 6m0.000853216s for node "functional-196950" to be "Ready" ...
	I1206 10:51:07.476374  399286 out.go:203] 
	W1206 10:51:07.479349  399286 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1206 10:51:07.479391  399286 out.go:285] * 
	W1206 10:51:07.481554  399286 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:51:07.484691  399286 out.go:203] 
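
The six-minute loop above is minikube polling the node's Ready condition via the apiserver at 192.168.49.2:8441; every attempt dies at the TCP level ("connect: connection refused"), so the apiserver never came up at all. For reference, the same checks can be run by hand against a profile like this one; a minimal sketch, assuming the usual minikube-created kubectl context named after the profile and that ss is present in the node image:

    # Ready condition of the node (True/False/Unknown once the apiserver answers)
    kubectl --context functional-196950 get node functional-196950 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

    # Distinguish "apiserver down" (connection refused) from TLS/auth problems
    curl -k https://192.168.49.2:8441/healthz

    # Confirm from inside the node that nothing is listening on the apiserver port
    out/minikube-linux-arm64 -p functional-196950 ssh -- sudo ss -ltnp | grep 8441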
	
	
	==> CRI-O <==
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.12702865Z" level=info msg="Using the internal default seccomp profile"
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.127091436Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.127143753Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.127203191Z" level=info msg="RDT not available in the host system"
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.127272804Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.128176526Z" level=info msg="Conmon does support the --sync option"
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.128207641Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.128227021Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.128935041Z" level=info msg="Conmon does support the --sync option"
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.128966852Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.129131408Z" level=info msg="Updated default CNI network name to "
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.129748276Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oc
i/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_
memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_d
ir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [c
rio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.130139541Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.130206373Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.184525954Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.184710276Z" level=info msg="Starting seccomp notifier watcher"
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.184779404Z" level=info msg="Create NRI interface"
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.184893957Z" level=info msg="built-in NRI default validator is disabled"
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.18491479Z" level=info msg="runtime interface created"
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.18492819Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.184937543Z" level=info msg="runtime interface starting up..."
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.184944682Z" level=info msg="starting plugins..."
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.184959082Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 06 10:45:05 functional-196950 crio[5345]: time="2025-12-06T10:45:05.185027768Z" level=info msg="No systemd watchdog enabled"
	Dec 06 10:45:05 functional-196950 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:51:12.237451    8735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:51:12.238278    8735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:51:12.239953    8735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:51:12.240292    8735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:51:12.241829    8735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 6 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014752] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503231] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.065820] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.901896] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.944603] kauditd_printk_skb: 39 callbacks suppressed
	[Dec 6 09:04] hrtimer: interrupt took 32230230 ns
	[Dec 6 10:25] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 6 10:26] overlayfs: idmapped layers are currently not supported
	[  +0.066821] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 6 10:32] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:33] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 10:51:12 up  2:33,  0 user,  load average: 0.44, 0.32, 0.87
	Linux functional-196950 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 10:51:09 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:51:10 functional-196950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 813.
	Dec 06 10:51:10 functional-196950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:10 functional-196950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:10 functional-196950 kubelet[8607]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:51:10 functional-196950 kubelet[8607]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:51:10 functional-196950 kubelet[8607]: E1206 10:51:10.283857    8607 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:51:10 functional-196950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:51:10 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:51:10 functional-196950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 814.
	Dec 06 10:51:10 functional-196950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:10 functional-196950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:11 functional-196950 kubelet[8628]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:51:11 functional-196950 kubelet[8628]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:51:11 functional-196950 kubelet[8628]: E1206 10:51:11.036986    8628 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:51:11 functional-196950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:51:11 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:51:11 functional-196950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 815.
	Dec 06 10:51:11 functional-196950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:11 functional-196950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:11 functional-196950 kubelet[8648]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:51:11 functional-196950 kubelet[8648]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:51:11 functional-196950 kubelet[8648]: E1206 10:51:11.804825    8648 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:51:11 functional-196950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:51:11 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
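The kubelet log above pins down the failure chain: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host ("cgroup v1 support is unsupported"), so the apiserver never comes up and every request to localhost:8441 is refused. A minimal check of which cgroup hierarchy the node actually sees (a diagnostic sketch against this profile, not part of the recorded run; `cgroup2fs` indicates v2, `tmpfs` indicates v1):

	# filesystem type mounted at /sys/fs/cgroup inside the minikube container
	docker exec functional-196950 stat -fc %T /sys/fs/cgroup/
	# Ubuntu 20.04 hosts boot a cgroup v1 hierarchy by default unless the kernel
	# command line sets systemd.unified_cgroup_hierarchy=1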
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-196950 -n functional-196950
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-196950 -n functional-196950: exit status 2 (355.378307ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-196950" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.49s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.51s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 kubectl -- --context functional-196950 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-196950 kubectl -- --context functional-196950 get pods: exit status 1 (109.44101ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-arm64 -p functional-196950 kubectl -- --context functional-196950 get pods": exit status 1
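Any client fails identically while the apiserver is down; a hypothetical manual reproduction against the same endpoint the test targets:

	# /healthz answers only once kube-apiserver is actually serving on 8441
	curl -sk https://192.168.49.2:8441/healthz
	# the same refusal surfaces through the minikube kubectl pass-through
	out/minikube-linux-arm64 -p functional-196950 kubectl -- --context functional-196950 get pods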
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-196950
helpers_test.go:243: (dbg) docker inspect functional-196950:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1",
	        "Created": "2025-12-06T10:36:45.201779678Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 393848,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T10:36:45.318229053Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/hostname",
	        "HostsPath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/hosts",
	        "LogPath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1-json.log",
	        "Name": "/functional-196950",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-196950:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-196950",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1",
	                "LowerDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1-init/diff:/var/lib/docker/overlay2/5011226d55616c9977b14c1fe617d1302fe59373df05ce8ec6e21b79143a1c57/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-196950",
	                "Source": "/var/lib/docker/volumes/functional-196950/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-196950",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-196950",
	                "name.minikube.sigs.k8s.io": "functional-196950",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9b8f961d55d7529aed7b841f2ac9f818c22ff12b8ad73f2d6bcee22656d9749a",
	            "SandboxKey": "/var/run/docker/netns/9b8f961d55d7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33158"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-196950": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:c1:40:2a:93:47",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a566bfdfd33a868cf61e5b18b36cbd55e9868f24cbb091e055ae606aeb8c6f03",
	                    "EndpointID": "452fe32bde0c42c4c35d700488ae93aeecc6c6a971ac6f1a8a492dbc4b328ed9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-196950",
	                        "d150aac7296d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
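The inspect output above shows 8441/tcp (the apiserver port) published on 127.0.0.1:33161. The same Go-template pattern minikube itself uses for the SSH port (visible in the provisioning log below) extracts it directly; a small sketch:

	# host port backing the apiserver, read from the inspect data above
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-196950
	# prints 33161; when the apiserver is up it is reachable at https://127.0.0.1:33161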
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-196950 -n functional-196950
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-196950 -n functional-196950: exit status 2 (337.035867ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-196950 logs -n 25: (1.083555117s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-205266 image ls --format short --alsologtostderr                                                                                       │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image   │ functional-205266 image ls --format yaml --alsologtostderr                                                                                        │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ ssh     │ functional-205266 ssh pgrep buildkitd                                                                                                             │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │                     │
	│ image   │ functional-205266 image ls --format json --alsologtostderr                                                                                        │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image   │ functional-205266 image build -t localhost/my-image:functional-205266 testdata/build --alsologtostderr                                            │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image   │ functional-205266 image ls --format table --alsologtostderr                                                                                       │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image   │ functional-205266 image ls                                                                                                                        │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ delete  │ -p functional-205266                                                                                                                              │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ start   │ -p functional-196950 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │                     │
	│ start   │ -p functional-196950 --alsologtostderr -v=8                                                                                                       │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:45 UTC │                     │
	│ cache   │ functional-196950 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ cache   │ functional-196950 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ cache   │ functional-196950 cache add registry.k8s.io/pause:latest                                                                                          │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ cache   │ functional-196950 cache add minikube-local-cache-test:functional-196950                                                                           │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ cache   │ functional-196950 cache delete minikube-local-cache-test:functional-196950                                                                        │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ ssh     │ functional-196950 ssh sudo crictl images                                                                                                          │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ ssh     │ functional-196950 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ ssh     │ functional-196950 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │                     │
	│ cache   │ functional-196950 cache reload                                                                                                                    │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ ssh     │ functional-196950 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ kubectl │ functional-196950 kubectl -- --context functional-196950 get pods                                                                                 │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:45:01
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:45:01.787203  399286 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:45:01.787433  399286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:45:01.787467  399286 out.go:374] Setting ErrFile to fd 2...
	I1206 10:45:01.787489  399286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:45:01.787778  399286 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 10:45:01.788186  399286 out.go:368] Setting JSON to false
	I1206 10:45:01.789151  399286 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8853,"bootTime":1765009049,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 10:45:01.789259  399286 start.go:143] virtualization:  
	I1206 10:45:01.792729  399286 out.go:179] * [functional-196950] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:45:01.796494  399286 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 10:45:01.796574  399286 notify.go:221] Checking for updates...
	I1206 10:45:01.802323  399286 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:45:01.805290  399286 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 10:45:01.808768  399286 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-362985/.minikube
	I1206 10:45:01.811515  399286 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:45:01.814379  399286 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:45:01.817672  399286 config.go:182] Loaded profile config "functional-196950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:45:01.817798  399286 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:45:01.851887  399286 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:45:01.852009  399286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:45:01.921321  399286 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:45:01.909571102 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64
IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:45:01.921426  399286 docker.go:319] overlay module found
	I1206 10:45:01.926314  399286 out.go:179] * Using the docker driver based on existing profile
	I1206 10:45:01.929149  399286 start.go:309] selected driver: docker
	I1206 10:45:01.929174  399286 start.go:927] validating driver "docker" against &{Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:45:01.929299  399286 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:45:01.929402  399286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:45:02.005684  399286 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:45:01.991905909 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64
IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx
Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:45:02.006178  399286 cni.go:84] Creating CNI manager for ""
	I1206 10:45:02.006252  399286 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:45:02.006308  399286 start.go:353] cluster config:
	{Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:45:02.012455  399286 out.go:179] * Starting "functional-196950" primary control-plane node in "functional-196950" cluster
	I1206 10:45:02.015293  399286 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 10:45:02.018502  399286 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:45:02.021547  399286 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 10:45:02.021609  399286 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1206 10:45:02.021620  399286 cache.go:65] Caching tarball of preloaded images
	I1206 10:45:02.021746  399286 preload.go:238] Found /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1206 10:45:02.021762  399286 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1206 10:45:02.021883  399286 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/config.json ...
	I1206 10:45:02.022120  399286 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:45:02.058171  399286 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 10:45:02.058196  399286 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 10:45:02.058216  399286 cache.go:243] Successfully downloaded all kic artifacts
	I1206 10:45:02.058248  399286 start.go:360] acquireMachinesLock for functional-196950: {Name:mkd2471f275d1d2a438cb4ce89f1d1521a0fb340 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:45:02.058324  399286 start.go:364] duration metric: took 51.241µs to acquireMachinesLock for "functional-196950"
	I1206 10:45:02.058347  399286 start.go:96] Skipping create...Using existing machine configuration
	I1206 10:45:02.058352  399286 fix.go:54] fixHost starting: 
	I1206 10:45:02.058623  399286 cli_runner.go:164] Run: docker container inspect functional-196950 --format={{.State.Status}}
	I1206 10:45:02.075952  399286 fix.go:112] recreateIfNeeded on functional-196950: state=Running err=<nil>
	W1206 10:45:02.075984  399286 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 10:45:02.079219  399286 out.go:252] * Updating the running docker "functional-196950" container ...
	I1206 10:45:02.079261  399286 machine.go:94] provisionDockerMachine start ...
	I1206 10:45:02.079396  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:02.097606  399286 main.go:143] libmachine: Using SSH client type: native
	I1206 10:45:02.097945  399286 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1206 10:45:02.097963  399286 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:45:02.251117  399286 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-196950
	
	I1206 10:45:02.251145  399286 ubuntu.go:182] provisioning hostname "functional-196950"
	I1206 10:45:02.251226  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:02.270896  399286 main.go:143] libmachine: Using SSH client type: native
	I1206 10:45:02.271293  399286 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1206 10:45:02.271357  399286 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-196950 && echo "functional-196950" | sudo tee /etc/hostname
	I1206 10:45:02.434988  399286 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-196950
	
	I1206 10:45:02.435098  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:02.453713  399286 main.go:143] libmachine: Using SSH client type: native
	I1206 10:45:02.454033  399286 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1206 10:45:02.454055  399286 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-196950' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-196950/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-196950' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:45:02.607868  399286 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 10:45:02.607903  399286 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-362985/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-362985/.minikube}
	I1206 10:45:02.607940  399286 ubuntu.go:190] setting up certificates
	I1206 10:45:02.607949  399286 provision.go:84] configureAuth start
	I1206 10:45:02.608015  399286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-196950
	I1206 10:45:02.626134  399286 provision.go:143] copyHostCerts
	I1206 10:45:02.626186  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem
	I1206 10:45:02.626227  399286 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem, removing ...
	I1206 10:45:02.626247  399286 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem
	I1206 10:45:02.626323  399286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem (1082 bytes)
	I1206 10:45:02.626456  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem
	I1206 10:45:02.626477  399286 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem, removing ...
	I1206 10:45:02.626487  399286 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem
	I1206 10:45:02.626523  399286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem (1123 bytes)
	I1206 10:45:02.626584  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem
	I1206 10:45:02.626607  399286 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem, removing ...
	I1206 10:45:02.626611  399286 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem
	I1206 10:45:02.626634  399286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem (1679 bytes)
	I1206 10:45:02.626683  399286 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem org=jenkins.functional-196950 san=[127.0.0.1 192.168.49.2 functional-196950 localhost minikube]
	I1206 10:45:02.961448  399286 provision.go:177] copyRemoteCerts
	I1206 10:45:02.961531  399286 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:45:02.961575  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:02.978755  399286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:45:03.095893  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1206 10:45:03.095982  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 10:45:03.114611  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1206 10:45:03.114706  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 10:45:03.135133  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1206 10:45:03.135195  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 10:45:03.153562  399286 provision.go:87] duration metric: took 545.588133ms to configureAuth
	I1206 10:45:03.153601  399286 ubuntu.go:206] setting minikube options for container-runtime
	I1206 10:45:03.153843  399286 config.go:182] Loaded profile config "functional-196950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:45:03.153992  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:03.171946  399286 main.go:143] libmachine: Using SSH client type: native
	I1206 10:45:03.172256  399286 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1206 10:45:03.172279  399286 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 10:45:03.524489  399286 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 10:45:03.524512  399286 machine.go:97] duration metric: took 1.445242076s to provisionDockerMachine
	I1206 10:45:03.524523  399286 start.go:293] postStartSetup for "functional-196950" (driver="docker")
	I1206 10:45:03.524536  399286 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:45:03.524603  399286 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:45:03.524644  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:03.555449  399286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:45:03.668233  399286 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:45:03.672046  399286 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1206 10:45:03.672068  399286 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1206 10:45:03.672073  399286 command_runner.go:130] > VERSION_ID="12"
	I1206 10:45:03.672078  399286 command_runner.go:130] > VERSION="12 (bookworm)"
	I1206 10:45:03.672084  399286 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1206 10:45:03.672087  399286 command_runner.go:130] > ID=debian
	I1206 10:45:03.672092  399286 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1206 10:45:03.672114  399286 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1206 10:45:03.672130  399286 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1206 10:45:03.672206  399286 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 10:45:03.672228  399286 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 10:45:03.672240  399286 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-362985/.minikube/addons for local assets ...
	I1206 10:45:03.672300  399286 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-362985/.minikube/files for local assets ...
	I1206 10:45:03.672390  399286 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem -> 3648552.pem in /etc/ssl/certs
	I1206 10:45:03.672402  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem -> /etc/ssl/certs/3648552.pem
	I1206 10:45:03.672481  399286 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/test/nested/copy/364855/hosts -> hosts in /etc/test/nested/copy/364855
	I1206 10:45:03.672489  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/test/nested/copy/364855/hosts -> /etc/test/nested/copy/364855/hosts
	I1206 10:45:03.672536  399286 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/364855
	I1206 10:45:03.681376  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem --> /etc/ssl/certs/3648552.pem (1708 bytes)
	I1206 10:45:03.700845  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/test/nested/copy/364855/hosts --> /etc/test/nested/copy/364855/hosts (40 bytes)
	I1206 10:45:03.720695  399286 start.go:296] duration metric: took 196.153156ms for postStartSetup
	I1206 10:45:03.720782  399286 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:45:03.720851  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:03.739871  399286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:45:03.844136  399286 command_runner.go:130] > 11%
	I1206 10:45:03.844709  399286 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 10:45:03.849387  399286 command_runner.go:130] > 174G
	I1206 10:45:03.849978  399286 fix.go:56] duration metric: took 1.791620292s for fixHost
	I1206 10:45:03.850000  399286 start.go:83] releasing machines lock for "functional-196950", held for 1.791664797s
	I1206 10:45:03.850077  399286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-196950
	I1206 10:45:03.867785  399286 ssh_runner.go:195] Run: cat /version.json
	I1206 10:45:03.867838  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:03.868113  399286 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:45:03.868167  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:03.886546  399286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:45:03.911694  399286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:45:03.991370  399286 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764843390-22032", "minikube_version": "v1.37.0", "commit": "d7bfd7d6d80c3eeb1d6cf1c5f081f8642bc1997e"}
	I1206 10:45:03.991537  399286 ssh_runner.go:195] Run: systemctl --version
	I1206 10:45:04.088215  399286 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1206 10:45:04.091250  399286 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1206 10:45:04.091291  399286 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1206 10:45:04.091431  399286 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 10:45:04.130964  399286 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1206 10:45:04.136249  399286 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1206 10:45:04.136293  399286 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:45:04.136352  399286 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:45:04.145113  399286 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 10:45:04.145182  399286 start.go:496] detecting cgroup driver to use...
	I1206 10:45:04.145222  399286 detect.go:187] detected "cgroupfs" cgroup driver on host os
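Here "cgroupfs" is the cgroup manager (as opposed to "systemd") that minikube detected on the host and will push into both CRI-O and the kubelet below. A rough manual check of the host's cgroup layout, not minikube's exact probe:

	# "cgroup2fs" indicates a unified cgroup v2 hierarchy; "tmpfs" indicates legacy v1.
	stat -fc %T /sys/fs/cgroup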
	I1206 10:45:04.145282  399286 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 10:45:04.161420  399286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 10:45:04.175205  399286 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:45:04.175315  399286 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:45:04.191496  399286 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:45:04.205243  399286 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:45:04.349911  399286 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:45:04.470887  399286 docker.go:234] disabling docker service ...
	I1206 10:45:04.471006  399286 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:45:04.486933  399286 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:45:04.500707  399286 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:45:04.632842  399286 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:45:04.756279  399286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 10:45:04.770461  399286 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:45:04.785365  399286 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1206 10:45:04.786482  399286 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 10:45:04.786596  399286 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:45:04.796852  399286 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 10:45:04.796980  399286 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:45:04.806654  399286 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:45:04.816002  399286 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:45:04.825576  399286 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:45:04.834547  399286 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:45:04.844889  399286 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:45:04.854032  399286 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
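Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly these key/value settings (a sketch; the real file keeps whatever other keys it already carried):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]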
	I1206 10:45:04.863103  399286 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:45:04.870297  399286 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1206 10:45:04.871475  399286 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 10:45:04.879247  399286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:45:04.992959  399286 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 10:45:05.192927  399286 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 10:45:05.193085  399286 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 10:45:05.197937  399286 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1206 10:45:05.197964  399286 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1206 10:45:05.197971  399286 command_runner.go:130] > Device: 0,72	Inode: 1640        Links: 1
	I1206 10:45:05.197987  399286 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 10:45:05.198031  399286 command_runner.go:130] > Access: 2025-12-06 10:45:05.125759427 +0000
	I1206 10:45:05.198049  399286 command_runner.go:130] > Modify: 2025-12-06 10:45:05.125759427 +0000
	I1206 10:45:05.198060  399286 command_runner.go:130] > Change: 2025-12-06 10:45:05.125759427 +0000
	I1206 10:45:05.198063  399286 command_runner.go:130] >  Birth: -
	I1206 10:45:05.198081  399286 start.go:564] Will wait 60s for crictl version
	I1206 10:45:05.198158  399286 ssh_runner.go:195] Run: which crictl
	I1206 10:45:05.202333  399286 command_runner.go:130] > /usr/local/bin/crictl
	I1206 10:45:05.202451  399286 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 10:45:05.227773  399286 command_runner.go:130] > Version:  0.1.0
	I1206 10:45:05.227855  399286 command_runner.go:130] > RuntimeName:  cri-o
	I1206 10:45:05.227876  399286 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1206 10:45:05.227895  399286 command_runner.go:130] > RuntimeApiVersion:  v1
	I1206 10:45:05.230308  399286 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
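These crictl calls can omit an explicit endpoint because of the /etc/crictl.yaml written earlier; the equivalent one-off invocation would be:

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version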
	I1206 10:45:05.230460  399286 ssh_runner.go:195] Run: crio --version
	I1206 10:45:05.261871  399286 command_runner.go:130] > crio version 1.34.3
	I1206 10:45:05.261971  399286 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1206 10:45:05.261992  399286 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1206 10:45:05.262014  399286 command_runner.go:130] >    GitTreeState:   dirty
	I1206 10:45:05.262045  399286 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1206 10:45:05.262062  399286 command_runner.go:130] >    GoVersion:      go1.24.6
	I1206 10:45:05.262083  399286 command_runner.go:130] >    Compiler:       gc
	I1206 10:45:05.262102  399286 command_runner.go:130] >    Platform:       linux/arm64
	I1206 10:45:05.262141  399286 command_runner.go:130] >    Linkmode:       static
	I1206 10:45:05.262176  399286 command_runner.go:130] >    BuildTags:
	I1206 10:45:05.262192  399286 command_runner.go:130] >      static
	I1206 10:45:05.262229  399286 command_runner.go:130] >      netgo
	I1206 10:45:05.262248  399286 command_runner.go:130] >      osusergo
	I1206 10:45:05.262264  399286 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1206 10:45:05.262283  399286 command_runner.go:130] >      seccomp
	I1206 10:45:05.262317  399286 command_runner.go:130] >      apparmor
	I1206 10:45:05.262335  399286 command_runner.go:130] >      selinux
	I1206 10:45:05.262352  399286 command_runner.go:130] >    LDFlags:          unknown
	I1206 10:45:05.262371  399286 command_runner.go:130] >    SeccompEnabled:   true
	I1206 10:45:05.262402  399286 command_runner.go:130] >    AppArmorEnabled:  false
	I1206 10:45:05.263735  399286 ssh_runner.go:195] Run: crio --version
	I1206 10:45:05.292275  399286 command_runner.go:130] > crio version 1.34.3
	I1206 10:45:05.292350  399286 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1206 10:45:05.292370  399286 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1206 10:45:05.292389  399286 command_runner.go:130] >    GitTreeState:   dirty
	I1206 10:45:05.292419  399286 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1206 10:45:05.292445  399286 command_runner.go:130] >    GoVersion:      go1.24.6
	I1206 10:45:05.292464  399286 command_runner.go:130] >    Compiler:       gc
	I1206 10:45:05.292484  399286 command_runner.go:130] >    Platform:       linux/arm64
	I1206 10:45:05.292510  399286 command_runner.go:130] >    Linkmode:       static
	I1206 10:45:05.292529  399286 command_runner.go:130] >    BuildTags:
	I1206 10:45:05.292548  399286 command_runner.go:130] >      static
	I1206 10:45:05.292577  399286 command_runner.go:130] >      netgo
	I1206 10:45:05.292594  399286 command_runner.go:130] >      osusergo
	I1206 10:45:05.292622  399286 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1206 10:45:05.292652  399286 command_runner.go:130] >      seccomp
	I1206 10:45:05.292669  399286 command_runner.go:130] >      apparmor
	I1206 10:45:05.292692  399286 command_runner.go:130] >      selinux
	I1206 10:45:05.292731  399286 command_runner.go:130] >    LDFlags:          unknown
	I1206 10:45:05.292749  399286 command_runner.go:130] >    SeccompEnabled:   true
	I1206 10:45:05.292767  399286 command_runner.go:130] >    AppArmorEnabled:  false
	I1206 10:45:05.300434  399286 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1206 10:45:05.303425  399286 cli_runner.go:164] Run: docker network inspect functional-196950 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:45:05.320718  399286 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 10:45:05.324954  399286 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1206 10:45:05.325142  399286 kubeadm.go:884] updating cluster {Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:45:05.325270  399286 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 10:45:05.325346  399286 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:45:05.356177  399286 command_runner.go:130] > {
	I1206 10:45:05.356195  399286 command_runner.go:130] >   "images":  [
	I1206 10:45:05.356199  399286 command_runner.go:130] >     {
	I1206 10:45:05.356208  399286 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1206 10:45:05.356213  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.356218  399286 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1206 10:45:05.356222  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356226  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.356235  399286 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1206 10:45:05.356243  399286 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1206 10:45:05.356246  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356251  399286 command_runner.go:130] >       "size":  "111333938",
	I1206 10:45:05.356254  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.356259  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.356262  399286 command_runner.go:130] >     },
	I1206 10:45:05.356265  399286 command_runner.go:130] >     {
	I1206 10:45:05.356272  399286 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1206 10:45:05.356285  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.356291  399286 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1206 10:45:05.356294  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356298  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.356307  399286 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1206 10:45:05.356315  399286 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1206 10:45:05.356318  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356322  399286 command_runner.go:130] >       "size":  "29037500",
	I1206 10:45:05.356326  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.356334  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.356337  399286 command_runner.go:130] >     },
	I1206 10:45:05.356340  399286 command_runner.go:130] >     {
	I1206 10:45:05.356346  399286 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1206 10:45:05.356350  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.356355  399286 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1206 10:45:05.356358  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356362  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.356369  399286 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1206 10:45:05.356377  399286 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1206 10:45:05.356380  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356385  399286 command_runner.go:130] >       "size":  "74491780",
	I1206 10:45:05.356389  399286 command_runner.go:130] >       "username":  "nonroot",
	I1206 10:45:05.356393  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.356396  399286 command_runner.go:130] >     },
	I1206 10:45:05.356399  399286 command_runner.go:130] >     {
	I1206 10:45:05.356405  399286 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1206 10:45:05.356409  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.356428  399286 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1206 10:45:05.356433  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356438  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.356446  399286 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1206 10:45:05.356453  399286 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1206 10:45:05.356457  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356465  399286 command_runner.go:130] >       "size":  "60857170",
	I1206 10:45:05.356469  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.356472  399286 command_runner.go:130] >         "value":  "0"
	I1206 10:45:05.356475  399286 command_runner.go:130] >       },
	I1206 10:45:05.356488  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.356492  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.356495  399286 command_runner.go:130] >     },
	I1206 10:45:05.356498  399286 command_runner.go:130] >     {
	I1206 10:45:05.356505  399286 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1206 10:45:05.356508  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.356513  399286 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1206 10:45:05.356516  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356520  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.356528  399286 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1206 10:45:05.356536  399286 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1206 10:45:05.356539  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356543  399286 command_runner.go:130] >       "size":  "84949999",
	I1206 10:45:05.356546  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.356550  399286 command_runner.go:130] >         "value":  "0"
	I1206 10:45:05.356553  399286 command_runner.go:130] >       },
	I1206 10:45:05.356557  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.356561  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.356564  399286 command_runner.go:130] >     },
	I1206 10:45:05.356567  399286 command_runner.go:130] >     {
	I1206 10:45:05.356573  399286 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1206 10:45:05.356577  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.356583  399286 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1206 10:45:05.356586  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356590  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.356598  399286 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1206 10:45:05.356606  399286 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1206 10:45:05.356609  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356617  399286 command_runner.go:130] >       "size":  "72170325",
	I1206 10:45:05.356623  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.356627  399286 command_runner.go:130] >         "value":  "0"
	I1206 10:45:05.356631  399286 command_runner.go:130] >       },
	I1206 10:45:05.356634  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.356638  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.356641  399286 command_runner.go:130] >     },
	I1206 10:45:05.356643  399286 command_runner.go:130] >     {
	I1206 10:45:05.356650  399286 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1206 10:45:05.356654  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.356659  399286 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1206 10:45:05.356662  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356666  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.356674  399286 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1206 10:45:05.356681  399286 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1206 10:45:05.356684  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356688  399286 command_runner.go:130] >       "size":  "74106775",
	I1206 10:45:05.356692  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.356695  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.356698  399286 command_runner.go:130] >     },
	I1206 10:45:05.356701  399286 command_runner.go:130] >     {
	I1206 10:45:05.356708  399286 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1206 10:45:05.356711  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.356716  399286 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1206 10:45:05.356719  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356723  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.356730  399286 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1206 10:45:05.356747  399286 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1206 10:45:05.356751  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356755  399286 command_runner.go:130] >       "size":  "49822549",
	I1206 10:45:05.356759  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.356763  399286 command_runner.go:130] >         "value":  "0"
	I1206 10:45:05.356766  399286 command_runner.go:130] >       },
	I1206 10:45:05.356770  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.356778  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.356781  399286 command_runner.go:130] >     },
	I1206 10:45:05.356784  399286 command_runner.go:130] >     {
	I1206 10:45:05.356790  399286 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1206 10:45:05.356794  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.356798  399286 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1206 10:45:05.356801  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356805  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.356812  399286 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1206 10:45:05.356820  399286 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1206 10:45:05.356823  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356826  399286 command_runner.go:130] >       "size":  "519884",
	I1206 10:45:05.356830  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.356833  399286 command_runner.go:130] >         "value":  "65535"
	I1206 10:45:05.356836  399286 command_runner.go:130] >       },
	I1206 10:45:05.356840  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.356843  399286 command_runner.go:130] >       "pinned":  true
	I1206 10:45:05.356850  399286 command_runner.go:130] >     }
	I1206 10:45:05.356853  399286 command_runner.go:130] >   ]
	I1206 10:45:05.356857  399286 command_runner.go:130] > }
	I1206 10:45:05.358491  399286 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:45:05.358523  399286 crio.go:433] Images already preloaded, skipping extraction
	I1206 10:45:05.358585  399286 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:45:05.381820  399286 command_runner.go:130] > {
	I1206 10:45:05.381840  399286 command_runner.go:130] >   "images":  [
	I1206 10:45:05.381844  399286 command_runner.go:130] >     {
	I1206 10:45:05.381853  399286 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1206 10:45:05.381857  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.381864  399286 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1206 10:45:05.381867  399286 command_runner.go:130] >       ],
	I1206 10:45:05.381871  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.381880  399286 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1206 10:45:05.381888  399286 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1206 10:45:05.381892  399286 command_runner.go:130] >       ],
	I1206 10:45:05.381896  399286 command_runner.go:130] >       "size":  "111333938",
	I1206 10:45:05.381900  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.381909  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.381912  399286 command_runner.go:130] >     },
	I1206 10:45:05.381916  399286 command_runner.go:130] >     {
	I1206 10:45:05.381922  399286 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1206 10:45:05.381926  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.381932  399286 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1206 10:45:05.381935  399286 command_runner.go:130] >       ],
	I1206 10:45:05.381939  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.381947  399286 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1206 10:45:05.381956  399286 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1206 10:45:05.381959  399286 command_runner.go:130] >       ],
	I1206 10:45:05.381963  399286 command_runner.go:130] >       "size":  "29037500",
	I1206 10:45:05.381967  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.381973  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.381977  399286 command_runner.go:130] >     },
	I1206 10:45:05.381980  399286 command_runner.go:130] >     {
	I1206 10:45:05.381987  399286 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1206 10:45:05.381990  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.381999  399286 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1206 10:45:05.382003  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382007  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.382014  399286 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1206 10:45:05.382022  399286 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1206 10:45:05.382025  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382029  399286 command_runner.go:130] >       "size":  "74491780",
	I1206 10:45:05.382033  399286 command_runner.go:130] >       "username":  "nonroot",
	I1206 10:45:05.382037  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.382040  399286 command_runner.go:130] >     },
	I1206 10:45:05.382043  399286 command_runner.go:130] >     {
	I1206 10:45:05.382049  399286 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1206 10:45:05.382053  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.382058  399286 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1206 10:45:05.382063  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382067  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.382074  399286 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1206 10:45:05.382082  399286 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1206 10:45:05.382085  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382089  399286 command_runner.go:130] >       "size":  "60857170",
	I1206 10:45:05.382093  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.382096  399286 command_runner.go:130] >         "value":  "0"
	I1206 10:45:05.382100  399286 command_runner.go:130] >       },
	I1206 10:45:05.382398  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.382411  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.382415  399286 command_runner.go:130] >     },
	I1206 10:45:05.382419  399286 command_runner.go:130] >     {
	I1206 10:45:05.382427  399286 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1206 10:45:05.382437  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.382443  399286 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1206 10:45:05.382446  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382450  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.382463  399286 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1206 10:45:05.382476  399286 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1206 10:45:05.382479  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382484  399286 command_runner.go:130] >       "size":  "84949999",
	I1206 10:45:05.382492  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.382495  399286 command_runner.go:130] >         "value":  "0"
	I1206 10:45:05.382499  399286 command_runner.go:130] >       },
	I1206 10:45:05.382503  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.382507  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.382510  399286 command_runner.go:130] >     },
	I1206 10:45:05.382514  399286 command_runner.go:130] >     {
	I1206 10:45:05.382524  399286 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1206 10:45:05.382528  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.382534  399286 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1206 10:45:05.382541  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382546  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.382555  399286 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1206 10:45:05.382568  399286 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1206 10:45:05.382571  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382575  399286 command_runner.go:130] >       "size":  "72170325",
	I1206 10:45:05.382579  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.382583  399286 command_runner.go:130] >         "value":  "0"
	I1206 10:45:05.382590  399286 command_runner.go:130] >       },
	I1206 10:45:05.382594  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.382597  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.382601  399286 command_runner.go:130] >     },
	I1206 10:45:05.382604  399286 command_runner.go:130] >     {
	I1206 10:45:05.382615  399286 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1206 10:45:05.382618  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.382624  399286 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1206 10:45:05.382627  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382631  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.382643  399286 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1206 10:45:05.382651  399286 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1206 10:45:05.382658  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382666  399286 command_runner.go:130] >       "size":  "74106775",
	I1206 10:45:05.382672  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.382676  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.382679  399286 command_runner.go:130] >     },
	I1206 10:45:05.382682  399286 command_runner.go:130] >     {
	I1206 10:45:05.382693  399286 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1206 10:45:05.382697  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.382702  399286 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1206 10:45:05.382706  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382710  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.382722  399286 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1206 10:45:05.382745  399286 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1206 10:45:05.382753  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382757  399286 command_runner.go:130] >       "size":  "49822549",
	I1206 10:45:05.382761  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.382765  399286 command_runner.go:130] >         "value":  "0"
	I1206 10:45:05.382768  399286 command_runner.go:130] >       },
	I1206 10:45:05.382772  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.382780  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.382783  399286 command_runner.go:130] >     },
	I1206 10:45:05.382786  399286 command_runner.go:130] >     {
	I1206 10:45:05.382793  399286 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1206 10:45:05.382797  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.382805  399286 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1206 10:45:05.382808  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382812  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.382820  399286 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1206 10:45:05.382832  399286 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1206 10:45:05.382835  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382839  399286 command_runner.go:130] >       "size":  "519884",
	I1206 10:45:05.382843  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.382847  399286 command_runner.go:130] >         "value":  "65535"
	I1206 10:45:05.382857  399286 command_runner.go:130] >       },
	I1206 10:45:05.382861  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.382865  399286 command_runner.go:130] >       "pinned":  true
	I1206 10:45:05.382868  399286 command_runner.go:130] >     }
	I1206 10:45:05.382871  399286 command_runner.go:130] >   ]
	I1206 10:45:05.382874  399286 command_runner.go:130] > }
	I1206 10:45:05.396183  399286 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:45:05.396208  399286 cache_images.go:86] Images are preloaded, skipping loading
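The preload check above parses that JSON and compares it with the image list expected for Kubernetes v1.35.0-beta.0 on crio. A quick way to eyeball the same data on the node (assuming jq is installed there):

	sudo crictl images --output json | jq -r '.images[].repoTags[]'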
	I1206 10:45:05.396219  399286 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1206 10:45:05.396325  399286 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-196950 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
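Two details of the generated kubelet unit are easy to miss: the empty ExecStart= line uses the systemd drop-in convention of clearing the previously defined ExecStart before redefining it, and --cgroups-per-qos=false together with --enforce-node-allocatable= disable the kubelet's QoS cgroup enforcement to match the cgroupfs manager chosen above. Once applied, the effective unit can be inspected on the node with:

	# Prints the base unit followed by every drop-in that overrides it.
	systemctl cat kubelet
	systemctl is-active kubelet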
	I1206 10:45:05.396421  399286 ssh_runner.go:195] Run: crio config
	I1206 10:45:05.425462  399286 command_runner.go:130] ! time="2025-12-06T10:45:05.425119459Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1206 10:45:05.425532  399286 command_runner.go:130] ! time="2025-12-06T10:45:05.425157991Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1206 10:45:05.425754  399286 command_runner.go:130] ! time="2025-12-06T10:45:05.425195308Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1206 10:45:05.425797  399286 command_runner.go:130] ! time="2025-12-06T10:45:05.42522017Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1206 10:45:05.425982  399286 command_runner.go:130] ! time="2025-12-06T10:45:05.425299687Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:45:05.426160  399286 command_runner.go:130] ! time="2025-12-06T10:45:05.42561672Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1206 10:45:05.442529  399286 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1206 10:45:05.470811  399286 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1206 10:45:05.470887  399286 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1206 10:45:05.470910  399286 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1206 10:45:05.470925  399286 command_runner.go:130] > #
	I1206 10:45:05.470961  399286 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1206 10:45:05.470990  399286 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1206 10:45:05.471012  399286 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1206 10:45:05.471037  399286 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1206 10:45:05.471066  399286 command_runner.go:130] > # reload'.
	I1206 10:45:05.471089  399286 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1206 10:45:05.471110  399286 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1206 10:45:05.471132  399286 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1206 10:45:05.471165  399286 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1206 10:45:05.471189  399286 command_runner.go:130] > [crio]
	I1206 10:45:05.471211  399286 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1206 10:45:05.471233  399286 command_runner.go:130] > # containers images, in this directory.
	I1206 10:45:05.471266  399286 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1206 10:45:05.471291  399286 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1206 10:45:05.471336  399286 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1206 10:45:05.471369  399286 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1206 10:45:05.471416  399286 command_runner.go:130] > # imagestore = ""
	I1206 10:45:05.471447  399286 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1206 10:45:05.471467  399286 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1206 10:45:05.471498  399286 command_runner.go:130] > # storage_driver = "overlay"
	I1206 10:45:05.471527  399286 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1206 10:45:05.471540  399286 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1206 10:45:05.471544  399286 command_runner.go:130] > # storage_option = [
	I1206 10:45:05.471548  399286 command_runner.go:130] > # ]
	I1206 10:45:05.471554  399286 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1206 10:45:05.471561  399286 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1206 10:45:05.471566  399286 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1206 10:45:05.471572  399286 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1206 10:45:05.471584  399286 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1206 10:45:05.471601  399286 command_runner.go:130] > # always happen on a node reboot
	I1206 10:45:05.471614  399286 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1206 10:45:05.471624  399286 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1206 10:45:05.471631  399286 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1206 10:45:05.471647  399286 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1206 10:45:05.471665  399286 command_runner.go:130] > # version_file_persist = ""
	I1206 10:45:05.471674  399286 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1206 10:45:05.471685  399286 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1206 10:45:05.471689  399286 command_runner.go:130] > # internal_wipe = true
	I1206 10:45:05.471701  399286 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1206 10:45:05.471736  399286 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1206 10:45:05.471747  399286 command_runner.go:130] > # internal_repair = true
	I1206 10:45:05.471753  399286 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1206 10:45:05.471760  399286 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1206 10:45:05.471768  399286 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1206 10:45:05.471774  399286 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1206 10:45:05.471790  399286 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1206 10:45:05.471793  399286 command_runner.go:130] > [crio.api]
	I1206 10:45:05.471799  399286 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1206 10:45:05.471810  399286 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1206 10:45:05.471817  399286 command_runner.go:130] > # IP address on which the stream server will listen.
	I1206 10:45:05.471822  399286 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1206 10:45:05.471829  399286 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1206 10:45:05.471837  399286 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1206 10:45:05.471841  399286 command_runner.go:130] > # stream_port = "0"
	I1206 10:45:05.471852  399286 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1206 10:45:05.471856  399286 command_runner.go:130] > # stream_enable_tls = false
	I1206 10:45:05.471867  399286 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1206 10:45:05.471871  399286 command_runner.go:130] > # stream_idle_timeout = ""
	I1206 10:45:05.471891  399286 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1206 10:45:05.471897  399286 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1206 10:45:05.471905  399286 command_runner.go:130] > # stream_tls_cert = ""
	I1206 10:45:05.471912  399286 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1206 10:45:05.471918  399286 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1206 10:45:05.471922  399286 command_runner.go:130] > # stream_tls_key = ""
	I1206 10:45:05.471928  399286 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1206 10:45:05.471937  399286 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1206 10:45:05.471942  399286 command_runner.go:130] > # automatically pick up the changes.
	I1206 10:45:05.471950  399286 command_runner.go:130] > # stream_tls_ca = ""
	I1206 10:45:05.471981  399286 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1206 10:45:05.471991  399286 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1206 10:45:05.471999  399286 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1206 10:45:05.472004  399286 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1206 10:45:05.472010  399286 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1206 10:45:05.472018  399286 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1206 10:45:05.472022  399286 command_runner.go:130] > [crio.runtime]
	I1206 10:45:05.472029  399286 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1206 10:45:05.472036  399286 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1206 10:45:05.472041  399286 command_runner.go:130] > # "nofile=1024:2048"
	I1206 10:45:05.472057  399286 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1206 10:45:05.472061  399286 command_runner.go:130] > # default_ulimits = [
	I1206 10:45:05.472064  399286 command_runner.go:130] > # ]
	I1206 10:45:05.472070  399286 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1206 10:45:05.472077  399286 command_runner.go:130] > # no_pivot = false
	I1206 10:45:05.472083  399286 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1206 10:45:05.472090  399286 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1206 10:45:05.472095  399286 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1206 10:45:05.472103  399286 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1206 10:45:05.472108  399286 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1206 10:45:05.472117  399286 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1206 10:45:05.472123  399286 command_runner.go:130] > # conmon = ""
	I1206 10:45:05.472127  399286 command_runner.go:130] > # Cgroup setting for conmon
	I1206 10:45:05.472137  399286 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1206 10:45:05.472143  399286 command_runner.go:130] > conmon_cgroup = "pod"
	I1206 10:45:05.472152  399286 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1206 10:45:05.472157  399286 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1206 10:45:05.472164  399286 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1206 10:45:05.472168  399286 command_runner.go:130] > # conmon_env = [
	I1206 10:45:05.472173  399286 command_runner.go:130] > # ]
	I1206 10:45:05.472180  399286 command_runner.go:130] > # Additional environment variables to set for all the
	I1206 10:45:05.472188  399286 command_runner.go:130] > # containers. These are overridden if set in the
	I1206 10:45:05.472198  399286 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1206 10:45:05.472204  399286 command_runner.go:130] > # default_env = [
	I1206 10:45:05.472208  399286 command_runner.go:130] > # ]
	I1206 10:45:05.472213  399286 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1206 10:45:05.472223  399286 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1206 10:45:05.472229  399286 command_runner.go:130] > # selinux = false
	I1206 10:45:05.472236  399286 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1206 10:45:05.472246  399286 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1206 10:45:05.472252  399286 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:45:05.472255  399286 command_runner.go:130] > # seccomp_profile = ""
	I1206 10:45:05.472262  399286 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1206 10:45:05.472270  399286 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:45:05.472274  399286 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1206 10:45:05.472281  399286 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1206 10:45:05.472287  399286 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1206 10:45:05.472295  399286 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1206 10:45:05.472302  399286 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1206 10:45:05.472315  399286 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:45:05.472320  399286 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1206 10:45:05.472326  399286 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1206 10:45:05.472330  399286 command_runner.go:130] > # the cgroup blockio controller.
	I1206 10:45:05.472337  399286 command_runner.go:130] > # blockio_config_file = ""
	I1206 10:45:05.472345  399286 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1206 10:45:05.472353  399286 command_runner.go:130] > # blockio parameters.
	I1206 10:45:05.472357  399286 command_runner.go:130] > # blockio_reload = false
	I1206 10:45:05.472364  399286 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1206 10:45:05.472367  399286 command_runner.go:130] > # irqbalance daemon.
	I1206 10:45:05.472373  399286 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1206 10:45:05.472381  399286 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1206 10:45:05.472391  399286 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1206 10:45:05.472412  399286 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1206 10:45:05.472419  399286 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1206 10:45:05.472428  399286 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1206 10:45:05.472437  399286 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:45:05.472448  399286 command_runner.go:130] > # rdt_config_file = ""
	I1206 10:45:05.472455  399286 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1206 10:45:05.472459  399286 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1206 10:45:05.472465  399286 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1206 10:45:05.472472  399286 command_runner.go:130] > # separate_pull_cgroup = ""
	I1206 10:45:05.472479  399286 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1206 10:45:05.472486  399286 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1206 10:45:05.472498  399286 command_runner.go:130] > # will be added.
	I1206 10:45:05.472503  399286 command_runner.go:130] > # default_capabilities = [
	I1206 10:45:05.472506  399286 command_runner.go:130] > # 	"CHOWN",
	I1206 10:45:05.472510  399286 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1206 10:45:05.472520  399286 command_runner.go:130] > # 	"FSETID",
	I1206 10:45:05.472525  399286 command_runner.go:130] > # 	"FOWNER",
	I1206 10:45:05.472529  399286 command_runner.go:130] > # 	"SETGID",
	I1206 10:45:05.472539  399286 command_runner.go:130] > # 	"SETUID",
	I1206 10:45:05.472558  399286 command_runner.go:130] > # 	"SETPCAP",
	I1206 10:45:05.472573  399286 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1206 10:45:05.472576  399286 command_runner.go:130] > # 	"KILL",
	I1206 10:45:05.472579  399286 command_runner.go:130] > # ]
	I1206 10:45:05.472587  399286 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1206 10:45:05.472602  399286 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1206 10:45:05.472607  399286 command_runner.go:130] > # add_inheritable_capabilities = false
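As a sketch of the two capability options above, the following trims the default set and opts into inheritable capabilities for non-root use; the particular capabilities kept here are illustrative:

    default_capabilities = [
        "CHOWN",
        "NET_BIND_SERVICE",
    ]
    # required if capabilities are expected to work for non-root users
    add_inheritable_capabilities = true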
	I1206 10:45:05.472616  399286 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1206 10:45:05.472628  399286 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1206 10:45:05.472632  399286 command_runner.go:130] > default_sysctls = [
	I1206 10:45:05.472637  399286 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1206 10:45:05.472643  399286 command_runner.go:130] > ]
	I1206 10:45:05.472650  399286 command_runner.go:130] > # List of devices on the host that a
	I1206 10:45:05.472660  399286 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1206 10:45:05.472664  399286 command_runner.go:130] > # allowed_devices = [
	I1206 10:45:05.472670  399286 command_runner.go:130] > # 	"/dev/fuse",
	I1206 10:45:05.472674  399286 command_runner.go:130] > # 	"/dev/net/tun",
	I1206 10:45:05.472681  399286 command_runner.go:130] > # ]
	I1206 10:45:05.472689  399286 command_runner.go:130] > # List of additional devices, specified as
	I1206 10:45:05.472697  399286 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1206 10:45:05.472703  399286 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1206 10:45:05.472711  399286 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1206 10:45:05.472716  399286 command_runner.go:130] > # additional_devices = [
	I1206 10:45:05.472722  399286 command_runner.go:130] > # ]
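A minimal sketch of the "<device-on-host>:<device-on-container>:<permissions>" form described above, assuming /dev/fuse should be exposed to all containers:

    additional_devices = [
        "/dev/fuse:/dev/fuse:rwm",  # host path : container path : permissions
    ]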
	I1206 10:45:05.472730  399286 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1206 10:45:05.472737  399286 command_runner.go:130] > # cdi_spec_dirs = [
	I1206 10:45:05.472743  399286 command_runner.go:130] > # 	"/etc/cdi",
	I1206 10:45:05.472747  399286 command_runner.go:130] > # 	"/var/run/cdi",
	I1206 10:45:05.472750  399286 command_runner.go:130] > # ]
	I1206 10:45:05.472757  399286 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1206 10:45:05.472766  399286 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1206 10:45:05.472770  399286 command_runner.go:130] > # Defaults to false.
	I1206 10:45:05.472775  399286 command_runner.go:130] > # device_ownership_from_security_context = false
	I1206 10:45:05.472782  399286 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1206 10:45:05.472791  399286 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1206 10:45:05.472795  399286 command_runner.go:130] > # hooks_dir = [
	I1206 10:45:05.472800  399286 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1206 10:45:05.472806  399286 command_runner.go:130] > # ]
	I1206 10:45:05.472813  399286 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1206 10:45:05.472819  399286 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1206 10:45:05.472827  399286 command_runner.go:130] > # its default mounts from the following two files:
	I1206 10:45:05.472830  399286 command_runner.go:130] > #
	I1206 10:45:05.472836  399286 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1206 10:45:05.472845  399286 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1206 10:45:05.472852  399286 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1206 10:45:05.472858  399286 command_runner.go:130] > #
	I1206 10:45:05.472865  399286 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1206 10:45:05.472871  399286 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1206 10:45:05.472878  399286 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1206 10:45:05.472887  399286 command_runner.go:130] > #      only add mounts it finds in this file.
	I1206 10:45:05.472896  399286 command_runner.go:130] > #
	I1206 10:45:05.472902  399286 command_runner.go:130] > # default_mounts_file = ""
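A sketch of overriding the mounts file location (the path and mount line are illustrative); each line of the referenced file uses the /SRC:/DST form described above:

    default_mounts_file = "/etc/crio/mounts.conf"
    # where /etc/crio/mounts.conf might contain one mount per line, e.g.:
    #   /usr/share/secrets:/run/secrets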
	I1206 10:45:05.472910  399286 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1206 10:45:05.472919  399286 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1206 10:45:05.472932  399286 command_runner.go:130] > # pids_limit = -1
	I1206 10:45:05.472938  399286 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1206 10:45:05.472947  399286 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1206 10:45:05.472961  399286 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1206 10:45:05.472979  399286 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1206 10:45:05.472983  399286 command_runner.go:130] > # log_size_max = -1
	I1206 10:45:05.472990  399286 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1206 10:45:05.472997  399286 command_runner.go:130] > # log_to_journald = false
	I1206 10:45:05.473006  399286 command_runner.go:130] > # Path to the directory in which container exit files are written by conmon.
	I1206 10:45:05.473011  399286 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1206 10:45:05.473016  399286 command_runner.go:130] > # Path to directory for container attach sockets.
	I1206 10:45:05.473024  399286 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1206 10:45:05.473032  399286 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1206 10:45:05.473036  399286 command_runner.go:130] > # bind_mount_prefix = ""
	I1206 10:45:05.473044  399286 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1206 10:45:05.473049  399286 command_runner.go:130] > # read_only = false
	I1206 10:45:05.473063  399286 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1206 10:45:05.473070  399286 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1206 10:45:05.473074  399286 command_runner.go:130] > # live configuration reload.
	I1206 10:45:05.473085  399286 command_runner.go:130] > # log_level = "info"
	I1206 10:45:05.473092  399286 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1206 10:45:05.473097  399286 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:45:05.473101  399286 command_runner.go:130] > # log_filter = ""
	I1206 10:45:05.473110  399286 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1206 10:45:05.473119  399286 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1206 10:45:05.473123  399286 command_runner.go:130] > # separated by comma.
	I1206 10:45:05.473132  399286 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1206 10:45:05.473138  399286 command_runner.go:130] > # uid_mappings = ""
	I1206 10:45:05.473145  399286 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1206 10:45:05.473155  399286 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1206 10:45:05.473162  399286 command_runner.go:130] > # separated by comma.
	I1206 10:45:05.473171  399286 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1206 10:45:05.473178  399286 command_runner.go:130] > # gid_mappings = ""
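A sketch of the containerUID:HostUID:Size and containerGID:HostGID:Size forms described above; the host ranges are illustrative, and note both options are deprecated in favor of Kubernetes user namespace support (KEP-127):

    uid_mappings = "0:100000:65536"  # container UID 0 maps to host UID 100000, for 65536 IDs
    gid_mappings = "0:100000:65536"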
	I1206 10:45:05.473185  399286 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1206 10:45:05.473197  399286 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1206 10:45:05.473206  399286 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1206 10:45:05.473217  399286 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1206 10:45:05.473223  399286 command_runner.go:130] > # minimum_mappable_uid = -1
	I1206 10:45:05.473230  399286 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1206 10:45:05.473238  399286 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1206 10:45:05.473249  399286 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1206 10:45:05.473260  399286 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1206 10:45:05.473264  399286 command_runner.go:130] > # minimum_mappable_gid = -1
	I1206 10:45:05.473270  399286 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1206 10:45:05.473282  399286 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1206 10:45:05.473287  399286 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1206 10:45:05.473292  399286 command_runner.go:130] > # ctr_stop_timeout = 30
	I1206 10:45:05.473298  399286 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1206 10:45:05.473307  399286 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1206 10:45:05.473312  399286 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1206 10:45:05.473317  399286 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1206 10:45:05.473323  399286 command_runner.go:130] > # drop_infra_ctr = true
	I1206 10:45:05.473330  399286 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1206 10:45:05.473339  399286 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1206 10:45:05.473347  399286 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1206 10:45:05.473351  399286 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1206 10:45:05.473362  399286 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1206 10:45:05.473373  399286 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1206 10:45:05.473378  399286 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1206 10:45:05.473383  399286 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1206 10:45:05.473389  399286 command_runner.go:130] > # shared_cpuset = ""
	I1206 10:45:05.473397  399286 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1206 10:45:05.473408  399286 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1206 10:45:05.473415  399286 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1206 10:45:05.473423  399286 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1206 10:45:05.473429  399286 command_runner.go:130] > # pinns_path = ""
	I1206 10:45:05.473435  399286 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1206 10:45:05.473442  399286 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1206 10:45:05.473446  399286 command_runner.go:130] > # enable_criu_support = true
	I1206 10:45:05.473458  399286 command_runner.go:130] > # Enable/disable the generation of container and sandbox lifecycle
	I1206 10:45:05.473465  399286 command_runner.go:130] > # events to be sent to the Kubelet to optimize the PLEG
	I1206 10:45:05.473469  399286 command_runner.go:130] > # enable_pod_events = false
	I1206 10:45:05.473476  399286 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1206 10:45:05.473483  399286 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1206 10:45:05.473487  399286 command_runner.go:130] > # default_runtime = "crun"
	I1206 10:45:05.473492  399286 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1206 10:45:05.473502  399286 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating them as directories).
	I1206 10:45:05.473513  399286 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1206 10:45:05.473521  399286 command_runner.go:130] > # creation as a file is not desired either.
	I1206 10:45:05.473531  399286 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1206 10:45:05.473540  399286 command_runner.go:130] > # the hostname is being managed dynamically.
	I1206 10:45:05.473551  399286 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1206 10:45:05.473554  399286 command_runner.go:130] > # ]
	I1206 10:45:05.473561  399286 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1206 10:45:05.473567  399286 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1206 10:45:05.473576  399286 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1206 10:45:05.473582  399286 command_runner.go:130] > # Each entry in the table should follow the format:
	I1206 10:45:05.473596  399286 command_runner.go:130] > #
	I1206 10:45:05.473602  399286 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1206 10:45:05.473606  399286 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1206 10:45:05.473610  399286 command_runner.go:130] > # runtime_type = "oci"
	I1206 10:45:05.473616  399286 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1206 10:45:05.473623  399286 command_runner.go:130] > # inherit_default_runtime = false
	I1206 10:45:05.473628  399286 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1206 10:45:05.473632  399286 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1206 10:45:05.473646  399286 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1206 10:45:05.473650  399286 command_runner.go:130] > # monitor_env = []
	I1206 10:45:05.473654  399286 command_runner.go:130] > # privileged_without_host_devices = false
	I1206 10:45:05.473659  399286 command_runner.go:130] > # allowed_annotations = []
	I1206 10:45:05.473667  399286 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1206 10:45:05.473673  399286 command_runner.go:130] > # no_sync_log = false
	I1206 10:45:05.473677  399286 command_runner.go:130] > # default_annotations = {}
	I1206 10:45:05.473682  399286 command_runner.go:130] > # stream_websockets = false
	I1206 10:45:05.473689  399286 command_runner.go:130] > # seccomp_profile = ""
	I1206 10:45:05.473708  399286 command_runner.go:130] > # Where:
	I1206 10:45:05.473717  399286 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1206 10:45:05.473724  399286 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1206 10:45:05.473730  399286 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1206 10:45:05.473739  399286 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1206 10:45:05.473743  399286 command_runner.go:130] > #   in $PATH.
	I1206 10:45:05.473749  399286 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1206 10:45:05.473754  399286 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1206 10:45:05.473763  399286 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1206 10:45:05.473768  399286 command_runner.go:130] > #   state.
	I1206 10:45:05.473775  399286 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1206 10:45:05.473789  399286 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1206 10:45:05.473796  399286 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1206 10:45:05.473802  399286 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1206 10:45:05.473810  399286 command_runner.go:130] > #   the values from the default runtime on load time.
	I1206 10:45:05.473816  399286 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1206 10:45:05.473824  399286 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1206 10:45:05.473834  399286 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1206 10:45:05.473841  399286 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1206 10:45:05.473846  399286 command_runner.go:130] > #   The currently recognized values are:
	I1206 10:45:05.473852  399286 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1206 10:45:05.473862  399286 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1206 10:45:05.473868  399286 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1206 10:45:05.473876  399286 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1206 10:45:05.473890  399286 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1206 10:45:05.473900  399286 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1206 10:45:05.473907  399286 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1206 10:45:05.473914  399286 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1206 10:45:05.473924  399286 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1206 10:45:05.473930  399286 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1206 10:45:05.473938  399286 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1206 10:45:05.473946  399286 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1206 10:45:05.473955  399286 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1206 10:45:05.473961  399286 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1206 10:45:05.473970  399286 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1206 10:45:05.473978  399286 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1206 10:45:05.473988  399286 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1206 10:45:05.473992  399286 command_runner.go:130] > #   deprecated option "conmon".
	I1206 10:45:05.474000  399286 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1206 10:45:05.474008  399286 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1206 10:45:05.474015  399286 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1206 10:45:05.474020  399286 command_runner.go:130] > #   should be moved to the container's cgroup
	I1206 10:45:05.474027  399286 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1206 10:45:05.474034  399286 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1206 10:45:05.474042  399286 command_runner.go:130] > #   When using the pod runtime and conmon-rs, the monitor_env can be used to further configure
	I1206 10:45:05.474048  399286 command_runner.go:130] > #   conmon-rs by using:
	I1206 10:45:05.474057  399286 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1206 10:45:05.474070  399286 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1206 10:45:05.474077  399286 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1206 10:45:05.474091  399286 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1206 10:45:05.474096  399286 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1206 10:45:05.474106  399286 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1206 10:45:05.474114  399286 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1206 10:45:05.474122  399286 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1206 10:45:05.474130  399286 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1206 10:45:05.474143  399286 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1206 10:45:05.474148  399286 command_runner.go:130] > #   when a machine crash happens.
	I1206 10:45:05.474159  399286 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1206 10:45:05.474172  399286 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1206 10:45:05.474181  399286 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1206 10:45:05.474188  399286 command_runner.go:130] > #   seccomp profile for the runtime.
	I1206 10:45:05.474212  399286 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1206 10:45:05.474223  399286 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1206 10:45:05.474227  399286 command_runner.go:130] > #
	I1206 10:45:05.474233  399286 command_runner.go:130] > # Using the seccomp notifier feature:
	I1206 10:45:05.474236  399286 command_runner.go:130] > #
	I1206 10:45:05.474244  399286 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1206 10:45:05.474254  399286 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1206 10:45:05.474257  399286 command_runner.go:130] > #
	I1206 10:45:05.474264  399286 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1206 10:45:05.474273  399286 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1206 10:45:05.474276  399286 command_runner.go:130] > #
	I1206 10:45:05.474283  399286 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1206 10:45:05.474286  399286 command_runner.go:130] > # feature.
	I1206 10:45:05.474289  399286 command_runner.go:130] > #
	I1206 10:45:05.474299  399286 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1206 10:45:05.474307  399286 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1206 10:45:05.474314  399286 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1206 10:45:05.474322  399286 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1206 10:45:05.474329  399286 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1206 10:45:05.474336  399286 command_runner.go:130] > #
	I1206 10:45:05.474344  399286 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1206 10:45:05.474350  399286 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1206 10:45:05.474354  399286 command_runner.go:130] > #
	I1206 10:45:05.474361  399286 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1206 10:45:05.474371  399286 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1206 10:45:05.474374  399286 command_runner.go:130] > #
	I1206 10:45:05.474380  399286 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1206 10:45:05.474386  399286 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1206 10:45:05.474392  399286 command_runner.go:130] > # limitation.
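Pulling the notifier description together, a minimal sketch of a runtime handler that is allowed to process the notifier annotation; the handler name and binary path are hypothetical, not part of this cluster's configuration:

    [crio.runtime.runtimes.runc-debug]
    runtime_path = "/usr/bin/runc"
    allowed_annotations = [
        "io.kubernetes.cri-o.seccompNotifierAction",
    ]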
	I1206 10:45:05.474401  399286 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1206 10:45:05.474409  399286 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1206 10:45:05.474413  399286 command_runner.go:130] > runtime_type = ""
	I1206 10:45:05.474417  399286 command_runner.go:130] > runtime_root = "/run/crun"
	I1206 10:45:05.474422  399286 command_runner.go:130] > inherit_default_runtime = false
	I1206 10:45:05.474426  399286 command_runner.go:130] > runtime_config_path = ""
	I1206 10:45:05.474432  399286 command_runner.go:130] > container_min_memory = ""
	I1206 10:45:05.474437  399286 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1206 10:45:05.474442  399286 command_runner.go:130] > monitor_cgroup = "pod"
	I1206 10:45:05.474448  399286 command_runner.go:130] > monitor_exec_cgroup = ""
	I1206 10:45:05.474453  399286 command_runner.go:130] > allowed_annotations = [
	I1206 10:45:05.474461  399286 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1206 10:45:05.474464  399286 command_runner.go:130] > ]
	I1206 10:45:05.474469  399286 command_runner.go:130] > privileged_without_host_devices = false
	I1206 10:45:05.474473  399286 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1206 10:45:05.474478  399286 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1206 10:45:05.474484  399286 command_runner.go:130] > runtime_type = ""
	I1206 10:45:05.474489  399286 command_runner.go:130] > runtime_root = "/run/runc"
	I1206 10:45:05.474496  399286 command_runner.go:130] > inherit_default_runtime = false
	I1206 10:45:05.474501  399286 command_runner.go:130] > runtime_config_path = ""
	I1206 10:45:05.474506  399286 command_runner.go:130] > container_min_memory = ""
	I1206 10:45:05.474513  399286 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1206 10:45:05.474518  399286 command_runner.go:130] > monitor_cgroup = "pod"
	I1206 10:45:05.474522  399286 command_runner.go:130] > monitor_exec_cgroup = ""
	I1206 10:45:05.474530  399286 command_runner.go:130] > privileged_without_host_devices = false
	I1206 10:45:05.474540  399286 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1206 10:45:05.474548  399286 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1206 10:45:05.474556  399286 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1206 10:45:05.474564  399286 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1206 10:45:05.474575  399286 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1206 10:45:05.474592  399286 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1206 10:45:05.474602  399286 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1206 10:45:05.474610  399286 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1206 10:45:05.474622  399286 command_runner.go:130] > # For a container to opt in to this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1206 10:45:05.474635  399286 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1206 10:45:05.474642  399286 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1206 10:45:05.474652  399286 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1206 10:45:05.474656  399286 command_runner.go:130] > # Example:
	I1206 10:45:05.474664  399286 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1206 10:45:05.474672  399286 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1206 10:45:05.474677  399286 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1206 10:45:05.474686  399286 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1206 10:45:05.474691  399286 command_runner.go:130] > # cpuset = "0-1"
	I1206 10:45:05.474703  399286 command_runner.go:130] > # cpushares = "5"
	I1206 10:45:05.474708  399286 command_runner.go:130] > # cpuquota = "1000"
	I1206 10:45:05.474712  399286 command_runner.go:130] > # cpuperiod = "100000"
	I1206 10:45:05.474716  399286 command_runner.go:130] > # cpulimit = "35"
	I1206 10:45:05.474720  399286 command_runner.go:130] > # Where:
	I1206 10:45:05.474724  399286 command_runner.go:130] > # The workload name is workload-type.
	I1206 10:45:05.474738  399286 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1206 10:45:05.474744  399286 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1206 10:45:05.474749  399286 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1206 10:45:05.474761  399286 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1206 10:45:05.474777  399286 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1206 10:45:05.474783  399286 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1206 10:45:05.474790  399286 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1206 10:45:05.474797  399286 command_runner.go:130] > # Default value is set to true
	I1206 10:45:05.474803  399286 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1206 10:45:05.474809  399286 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1206 10:45:05.474821  399286 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1206 10:45:05.474826  399286 command_runner.go:130] > # Default value is set to 'false'
	I1206 10:45:05.474830  399286 command_runner.go:130] > # disable_hostport_mapping = false
	I1206 10:45:05.474836  399286 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1206 10:45:05.474847  399286 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1206 10:45:05.474853  399286 command_runner.go:130] > # timezone = ""
	I1206 10:45:05.474860  399286 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1206 10:45:05.474866  399286 command_runner.go:130] > #
	I1206 10:45:05.474874  399286 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1206 10:45:05.474883  399286 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1206 10:45:05.474889  399286 command_runner.go:130] > [crio.image]
	I1206 10:45:05.474895  399286 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1206 10:45:05.474899  399286 command_runner.go:130] > # default_transport = "docker://"
	I1206 10:45:05.474913  399286 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1206 10:45:05.474920  399286 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1206 10:45:05.474924  399286 command_runner.go:130] > # global_auth_file = ""
	I1206 10:45:05.474929  399286 command_runner.go:130] > # The image used to instantiate infra containers.
	I1206 10:45:05.474938  399286 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:45:05.474943  399286 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1206 10:45:05.474952  399286 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1206 10:45:05.474959  399286 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1206 10:45:05.474967  399286 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:45:05.474972  399286 command_runner.go:130] > # pause_image_auth_file = ""
	I1206 10:45:05.474977  399286 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1206 10:45:05.474984  399286 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1206 10:45:05.474994  399286 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1206 10:45:05.475000  399286 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1206 10:45:05.475009  399286 command_runner.go:130] > # pause_command = "/pause"
	I1206 10:45:05.475015  399286 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1206 10:45:05.475021  399286 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1206 10:45:05.475030  399286 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1206 10:45:05.475036  399286 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1206 10:45:05.475044  399286 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1206 10:45:05.475051  399286 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1206 10:45:05.475058  399286 command_runner.go:130] > # pinned_images = [
	I1206 10:45:05.475061  399286 command_runner.go:130] > # ]
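A sketch of the three pattern styles the comment above describes (the registry names are illustrative):

    pinned_images = [
        "registry.k8s.io/pause:3.10.1",  # exact: must match the entire name
        "quay.io/example/*",             # glob: wildcard only at the end
        "*critical*",                    # keyword: wildcards on both ends
    ]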
	I1206 10:45:05.475067  399286 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1206 10:45:05.475074  399286 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1206 10:45:05.475083  399286 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1206 10:45:05.475090  399286 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1206 10:45:05.475098  399286 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1206 10:45:05.475104  399286 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1206 10:45:05.475110  399286 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1206 10:45:05.475120  399286 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1206 10:45:05.475129  399286 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1206 10:45:05.475138  399286 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1206 10:45:05.475145  399286 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1206 10:45:05.475150  399286 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1206 10:45:05.475156  399286 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1206 10:45:05.475165  399286 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1206 10:45:05.475169  399286 command_runner.go:130] > # changing them here.
	I1206 10:45:05.475176  399286 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1206 10:45:05.475183  399286 command_runner.go:130] > # insecure_registries = [
	I1206 10:45:05.475186  399286 command_runner.go:130] > # ]
	I1206 10:45:05.475193  399286 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1206 10:45:05.475201  399286 command_runner.go:130] > # ignore; the last one ignores volumes entirely.
	I1206 10:45:05.475208  399286 command_runner.go:130] > # image_volumes = "mkdir"
	I1206 10:45:05.475214  399286 command_runner.go:130] > # Temporary directory to use for storing big files
	I1206 10:45:05.475220  399286 command_runner.go:130] > # big_files_temporary_dir = ""
	I1206 10:45:05.475226  399286 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1206 10:45:05.475236  399286 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1206 10:45:05.475241  399286 command_runner.go:130] > # auto_reload_registries = false
	I1206 10:45:05.475247  399286 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1206 10:45:05.475257  399286 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1206 10:45:05.475267  399286 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1206 10:45:05.475271  399286 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1206 10:45:05.475277  399286 command_runner.go:130] > # The mode of short name resolution.
	I1206 10:45:05.475284  399286 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1206 10:45:05.475293  399286 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1206 10:45:05.475298  399286 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1206 10:45:05.475314  399286 command_runner.go:130] > # short_name_mode = "enforcing"
	I1206 10:45:05.475321  399286 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1206 10:45:05.475327  399286 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1206 10:45:05.475335  399286 command_runner.go:130] > # oci_artifact_mount_support = true
	I1206 10:45:05.475343  399286 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1206 10:45:05.475349  399286 command_runner.go:130] > # CNI plugins.
	I1206 10:45:05.475353  399286 command_runner.go:130] > [crio.network]
	I1206 10:45:05.475360  399286 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1206 10:45:05.475368  399286 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1206 10:45:05.475386  399286 command_runner.go:130] > # cni_default_network = ""
	I1206 10:45:05.475398  399286 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1206 10:45:05.475407  399286 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1206 10:45:05.475413  399286 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1206 10:45:05.475419  399286 command_runner.go:130] > # plugin_dirs = [
	I1206 10:45:05.475424  399286 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1206 10:45:05.475429  399286 command_runner.go:130] > # ]
	I1206 10:45:05.475434  399286 command_runner.go:130] > # List of included pod metrics.
	I1206 10:45:05.475441  399286 command_runner.go:130] > # included_pod_metrics = [
	I1206 10:45:05.475445  399286 command_runner.go:130] > # ]
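A sketch of pinning the CNI selection with the options above; the network name is hypothetical and the directories repeat the documented defaults:

    [crio.network]
    cni_default_network = "mynet"
    network_dir = "/etc/cni/net.d/"
    plugin_dirs = [
        "/opt/cni/bin/",
    ]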
	I1206 10:45:05.475451  399286 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1206 10:45:05.475457  399286 command_runner.go:130] > [crio.metrics]
	I1206 10:45:05.475463  399286 command_runner.go:130] > # Globally enable or disable metrics support.
	I1206 10:45:05.475467  399286 command_runner.go:130] > # enable_metrics = false
	I1206 10:45:05.475472  399286 command_runner.go:130] > # Specify enabled metrics collectors.
	I1206 10:45:05.475476  399286 command_runner.go:130] > # Per default all metrics are enabled.
	I1206 10:45:05.475483  399286 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1206 10:45:05.475490  399286 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1206 10:45:05.475497  399286 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1206 10:45:05.475501  399286 command_runner.go:130] > # metrics_collectors = [
	I1206 10:45:05.475505  399286 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1206 10:45:05.475510  399286 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1206 10:45:05.475518  399286 command_runner.go:130] > # 	"containers_oom_total",
	I1206 10:45:05.475522  399286 command_runner.go:130] > # 	"processes_defunct",
	I1206 10:45:05.475528  399286 command_runner.go:130] > # 	"operations_total",
	I1206 10:45:05.475533  399286 command_runner.go:130] > # 	"operations_latency_seconds",
	I1206 10:45:05.475540  399286 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1206 10:45:05.475547  399286 command_runner.go:130] > # 	"operations_errors_total",
	I1206 10:45:05.475554  399286 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1206 10:45:05.475559  399286 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1206 10:45:05.475564  399286 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1206 10:45:05.475571  399286 command_runner.go:130] > # 	"image_pulls_success_total",
	I1206 10:45:05.475576  399286 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1206 10:45:05.475583  399286 command_runner.go:130] > # 	"containers_oom_count_total",
	I1206 10:45:05.475590  399286 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1206 10:45:05.475602  399286 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1206 10:45:05.475607  399286 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1206 10:45:05.475610  399286 command_runner.go:130] > # ]
	I1206 10:45:05.475616  399286 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1206 10:45:05.475620  399286 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1206 10:45:05.475626  399286 command_runner.go:130] > # The port on which the metrics server will listen.
	I1206 10:45:05.475639  399286 command_runner.go:130] > # metrics_port = 9090
	I1206 10:45:05.475646  399286 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1206 10:45:05.475649  399286 command_runner.go:130] > # metrics_socket = ""
	I1206 10:45:05.475657  399286 command_runner.go:130] > # The certificate for the secure metrics server.
	I1206 10:45:05.475670  399286 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1206 10:45:05.475677  399286 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1206 10:45:05.475691  399286 command_runner.go:130] > # certificate on any modification event.
	I1206 10:45:05.475695  399286 command_runner.go:130] > # metrics_cert = ""
	I1206 10:45:05.475703  399286 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1206 10:45:05.475708  399286 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1206 10:45:05.475712  399286 command_runner.go:130] > # metrics_key = ""
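A sketch of enabling the metrics endpoint with a subset of collectors, relying on the prefix equivalence noted above; the host and port repeat the documented defaults, and the collector choice is illustrative:

    [crio.metrics]
    enable_metrics = true
    metrics_host = "127.0.0.1"
    metrics_port = 9090
    metrics_collectors = [
        "operations_total",  # treated the same as "crio_operations_total"
        "image_pulls_failure_total",
    ]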
	I1206 10:45:05.475720  399286 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1206 10:45:05.475727  399286 command_runner.go:130] > [crio.tracing]
	I1206 10:45:05.475732  399286 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1206 10:45:05.475737  399286 command_runner.go:130] > # enable_tracing = false
	I1206 10:45:05.475748  399286 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1206 10:45:05.475753  399286 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1206 10:45:05.475767  399286 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1206 10:45:05.475772  399286 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
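A sketch of turning on trace export with the keys documented above; setting the sampling rate to 1000000 per million means every span is sampled:

    [crio.tracing]
    enable_tracing = true
    tracing_endpoint = "127.0.0.1:4317"  # documented default collector address
    tracing_sampling_rate_per_million = 1000000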
	I1206 10:45:05.475781  399286 command_runner.go:130] > # CRI-O NRI configuration.
	I1206 10:45:05.475784  399286 command_runner.go:130] > [crio.nri]
	I1206 10:45:05.475789  399286 command_runner.go:130] > # Globally enable or disable NRI.
	I1206 10:45:05.475792  399286 command_runner.go:130] > # enable_nri = true
	I1206 10:45:05.475799  399286 command_runner.go:130] > # NRI socket to listen on.
	I1206 10:45:05.475804  399286 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1206 10:45:05.475811  399286 command_runner.go:130] > # NRI plugin directory to use.
	I1206 10:45:05.475817  399286 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1206 10:45:05.475825  399286 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1206 10:45:05.475830  399286 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1206 10:45:05.475835  399286 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1206 10:45:05.475891  399286 command_runner.go:130] > # nri_disable_connections = false
	I1206 10:45:05.475901  399286 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1206 10:45:05.475906  399286 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1206 10:45:05.475911  399286 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1206 10:45:05.475918  399286 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1206 10:45:05.475923  399286 command_runner.go:130] > # NRI default validator configuration.
	I1206 10:45:05.475933  399286 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1206 10:45:05.475940  399286 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1206 10:45:05.475946  399286 command_runner.go:130] > # can be restricted/rejected:
	I1206 10:45:05.475950  399286 command_runner.go:130] > # - OCI hook injection
	I1206 10:45:05.475958  399286 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1206 10:45:05.475964  399286 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1206 10:45:05.475969  399286 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1206 10:45:05.475976  399286 command_runner.go:130] > # - adjustment of linux namespaces
	I1206 10:45:05.475983  399286 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1206 10:45:05.475990  399286 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1206 10:45:05.476000  399286 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1206 10:45:05.476005  399286 command_runner.go:130] > #
	I1206 10:45:05.476012  399286 command_runner.go:130] > # [crio.nri.default_validator]
	I1206 10:45:05.476020  399286 command_runner.go:130] > # nri_enable_default_validator = false
	I1206 10:45:05.476026  399286 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1206 10:45:05.476035  399286 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1206 10:45:05.476042  399286 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1206 10:45:05.476048  399286 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1206 10:45:05.476056  399286 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1206 10:45:05.476061  399286 command_runner.go:130] > # nri_validator_required_plugins = [
	I1206 10:45:05.476064  399286 command_runner.go:130] > # ]
	I1206 10:45:05.476070  399286 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1206 10:45:05.476079  399286 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1206 10:45:05.476083  399286 command_runner.go:130] > [crio.stats]
	I1206 10:45:05.476089  399286 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1206 10:45:05.476095  399286 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1206 10:45:05.476102  399286 command_runner.go:130] > # stats_collection_period = 0
	I1206 10:45:05.476109  399286 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1206 10:45:05.476119  399286 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1206 10:45:05.476124  399286 command_runner.go:130] > # collection_period = 0
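The commented-out entries above are CRI-O's compiled-in defaults. As a minimal sketch (assuming CRI-O's default /etc/crio/crio.conf.d drop-in directory; the file name 99-overrides.conf is arbitrary), a few of them could be overridden without touching the main config:

    # Hypothetical drop-in enabling tracing and periodic stats collection.
    sudo tee /etc/crio/crio.conf.d/99-overrides.conf <<'EOF'
    [crio.tracing]
    enable_tracing = true
    tracing_endpoint = "127.0.0.1:4317"

    [crio.stats]
    stats_collection_period = 10
    EOF
    sudo systemctl restart crio    # pick up the new configuration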
	I1206 10:45:05.476211  399286 cni.go:84] Creating CNI manager for ""
	I1206 10:45:05.476226  399286 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:45:05.476254  399286 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 10:45:05.476282  399286 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-196950 NodeName:functional-196950 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:45:05.476417  399286 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-196950"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
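Before this generated config is copied to the node (as /var/tmp/minikube/kubeadm.yaml.new below), it can be sanity-checked by hand; a sketch assuming a kubeadm new enough (v1.26+) to ship the 'config validate' subcommand:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new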
	I1206 10:45:05.476505  399286 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 10:45:05.483758  399286 command_runner.go:130] > kubeadm
	I1206 10:45:05.483779  399286 command_runner.go:130] > kubectl
	I1206 10:45:05.483784  399286 command_runner.go:130] > kubelet
	I1206 10:45:05.484784  399286 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:45:05.484852  399286 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:45:05.492924  399286 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1206 10:45:05.506239  399286 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 10:45:05.519506  399286 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1206 10:45:05.533524  399286 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 10:45:05.537326  399286 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
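The grep above confirms the control-plane name already maps to the node IP. A sketch of the equivalent idempotent check-and-append (illustrative only, not minikube's actual code path):

    grep -q 'control-plane.minikube.internal' /etc/hosts \
      || echo '192.168.49.2 control-plane.minikube.internal' | sudo tee -a /etc/hosts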
	I1206 10:45:05.537418  399286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:45:05.647140  399286 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:45:05.721344  399286 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950 for IP: 192.168.49.2
	I1206 10:45:05.721367  399286 certs.go:195] generating shared ca certs ...
	I1206 10:45:05.721384  399286 certs.go:227] acquiring lock for ca certs: {Name:mke2ec61a37b6f3abbcbeb9abd23d6a19d011dd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:45:05.721593  399286 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.key
	I1206 10:45:05.721667  399286 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.key
	I1206 10:45:05.721683  399286 certs.go:257] generating profile certs ...
	I1206 10:45:05.721813  399286 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.key
	I1206 10:45:05.721910  399286 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.key.a77b39a6
	I1206 10:45:05.721994  399286 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.key
	I1206 10:45:05.722034  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1206 10:45:05.722057  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1206 10:45:05.722073  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1206 10:45:05.722118  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1206 10:45:05.722158  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1206 10:45:05.722199  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1206 10:45:05.722217  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1206 10:45:05.722228  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1206 10:45:05.722301  399286 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855.pem (1338 bytes)
	W1206 10:45:05.722365  399286 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855_empty.pem, impossibly tiny 0 bytes
	I1206 10:45:05.722388  399286 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 10:45:05.722448  399286 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem (1082 bytes)
	I1206 10:45:05.722502  399286 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:45:05.722537  399286 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem (1679 bytes)
	I1206 10:45:05.722611  399286 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem (1708 bytes)
	I1206 10:45:05.722670  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:45:05.722691  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855.pem -> /usr/share/ca-certificates/364855.pem
	I1206 10:45:05.722718  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem -> /usr/share/ca-certificates/3648552.pem
	I1206 10:45:05.723349  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:45:05.743026  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 10:45:05.763126  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:45:05.783337  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 10:45:05.802756  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 10:45:05.821457  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 10:45:05.839993  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:45:05.858402  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 10:45:05.876528  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:45:05.894729  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855.pem --> /usr/share/ca-certificates/364855.pem (1338 bytes)
	I1206 10:45:05.912947  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem --> /usr/share/ca-certificates/3648552.pem (1708 bytes)
	I1206 10:45:05.931356  399286 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 10:45:05.945284  399286 ssh_runner.go:195] Run: openssl version
	I1206 10:45:05.951573  399286 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1206 10:45:05.951648  399286 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:45:05.959293  399286 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:45:05.967114  399286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:45:05.970832  399286 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  6 10:26 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:45:05.971103  399286 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:26 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:45:05.971168  399286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:45:06.014236  399286 command_runner.go:130] > b5213941
	I1206 10:45:06.014768  399286 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 10:45:06.023097  399286 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364855.pem
	I1206 10:45:06.030984  399286 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364855.pem /etc/ssl/certs/364855.pem
	I1206 10:45:06.039316  399286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364855.pem
	I1206 10:45:06.043457  399286 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  6 10:36 /usr/share/ca-certificates/364855.pem
	I1206 10:45:06.043549  399286 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:36 /usr/share/ca-certificates/364855.pem
	I1206 10:45:06.043624  399286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364855.pem
	I1206 10:45:06.084760  399286 command_runner.go:130] > 51391683
	I1206 10:45:06.084914  399286 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 10:45:06.092772  399286 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648552.pem
	I1206 10:45:06.100248  399286 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648552.pem /etc/ssl/certs/3648552.pem
	I1206 10:45:06.107970  399286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648552.pem
	I1206 10:45:06.112031  399286 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  6 10:36 /usr/share/ca-certificates/3648552.pem
	I1206 10:45:06.112134  399286 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:36 /usr/share/ca-certificates/3648552.pem
	I1206 10:45:06.112229  399286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648552.pem
	I1206 10:45:06.152822  399286 command_runner.go:130] > 3ec20f2e
	I1206 10:45:06.153315  399286 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
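Each CA file is staged under /usr/share/ca-certificates and then linked into /etc/ssl/certs under its OpenSSL subject-hash name, which is how OpenSSL's default verify path locates it. A sketch of that convention for one certificate (the hash value b5213941 comes from this run's log):

    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")    # prints b5213941 here
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"   # ".0": first cert with this subject hash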
	I1206 10:45:06.161105  399286 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:45:06.165043  399286 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:45:06.165068  399286 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1206 10:45:06.165075  399286 command_runner.go:130] > Device: 259,1	Inode: 1826360     Links: 1
	I1206 10:45:06.165081  399286 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 10:45:06.165087  399286 command_runner.go:130] > Access: 2025-12-06 10:40:58.003190996 +0000
	I1206 10:45:06.165092  399286 command_runner.go:130] > Modify: 2025-12-06 10:36:53.916464205 +0000
	I1206 10:45:06.165098  399286 command_runner.go:130] > Change: 2025-12-06 10:36:53.916464205 +0000
	I1206 10:45:06.165103  399286 command_runner.go:130] >  Birth: 2025-12-06 10:36:53.916464205 +0000
	I1206 10:45:06.165195  399286 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 10:45:06.207365  399286 command_runner.go:130] > Certificate will not expire
	I1206 10:45:06.207850  399286 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 10:45:06.248448  399286 command_runner.go:130] > Certificate will not expire
	I1206 10:45:06.248932  399286 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 10:45:06.289656  399286 command_runner.go:130] > Certificate will not expire
	I1206 10:45:06.290116  399286 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 10:45:06.330828  399286 command_runner.go:130] > Certificate will not expire
	I1206 10:45:06.331412  399286 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 10:45:06.372096  399286 command_runner.go:130] > Certificate will not expire
	I1206 10:45:06.372595  399286 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1206 10:45:06.413596  399286 command_runner.go:130] > Certificate will not expire
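'-checkend 86400' makes openssl exit 0 (printing 'Certificate will not expire') only if the certificate is still valid 86400 seconds, i.e. 24 hours, from now. A sketch of using that exit code directly:

    if openssl x509 -noout -checkend 86400 \
         -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
        echo "ok: valid for at least another 24h"
    else
        echo "renew: expires (or already expired) within 24h"
    fi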
	I1206 10:45:06.414056  399286 kubeadm.go:401] StartCluster: {Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:45:06.414151  399286 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:45:06.414217  399286 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:45:06.442677  399286 cri.go:89] found id: ""
	I1206 10:45:06.442751  399286 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:45:06.449938  399286 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1206 10:45:06.449962  399286 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1206 10:45:06.449969  399286 command_runner.go:130] > /var/lib/minikube/etcd:
	I1206 10:45:06.450931  399286 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 10:45:06.450952  399286 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 10:45:06.451032  399286 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 10:45:06.459080  399286 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:45:06.459618  399286 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-196950" does not appear in /home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 10:45:06.459742  399286 kubeconfig.go:62] /home/jenkins/minikube-integration/22047-362985/kubeconfig needs updating (will repair): [kubeconfig missing "functional-196950" cluster setting kubeconfig missing "functional-196950" context setting]
	I1206 10:45:06.460016  399286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/kubeconfig: {Name:mk779651834cfbdc6f0b5e8f5a9abc0f05106181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:45:06.460484  399286 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 10:45:06.460638  399286 kapi.go:59] client config for functional-196950: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.crt", KeyFile:"/home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.key", CAFile:"/home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 10:45:06.461238  399286 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1206 10:45:06.461268  399286 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1206 10:45:06.461280  399286 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1206 10:45:06.461291  399286 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1206 10:45:06.461295  399286 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1206 10:45:06.461337  399286 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1206 10:45:06.461637  399286 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 10:45:06.473548  399286 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1206 10:45:06.473584  399286 kubeadm.go:602] duration metric: took 22.626231ms to restartPrimaryControlPlane
	I1206 10:45:06.473594  399286 kubeadm.go:403] duration metric: took 59.544914ms to StartCluster
	I1206 10:45:06.473609  399286 settings.go:142] acquiring lock: {Name:mk789e01bfd4ab9fa1e2a8415fa99b570b26926a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:45:06.473671  399286 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 10:45:06.474312  399286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/kubeconfig: {Name:mk779651834cfbdc6f0b5e8f5a9abc0f05106181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:45:06.474518  399286 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 10:45:06.474963  399286 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 10:45:06.475042  399286 addons.go:70] Setting storage-provisioner=true in profile "functional-196950"
	I1206 10:45:06.475066  399286 addons.go:239] Setting addon storage-provisioner=true in "functional-196950"
	I1206 10:45:06.475092  399286 host.go:66] Checking if "functional-196950" exists ...
	I1206 10:45:06.475912  399286 cli_runner.go:164] Run: docker container inspect functional-196950 --format={{.State.Status}}
	I1206 10:45:06.476264  399286 config.go:182] Loaded profile config "functional-196950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:45:06.476354  399286 addons.go:70] Setting default-storageclass=true in profile "functional-196950"
	I1206 10:45:06.476394  399286 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-196950"
	I1206 10:45:06.476791  399286 cli_runner.go:164] Run: docker container inspect functional-196950 --format={{.State.Status}}
	I1206 10:45:06.481213  399286 out.go:179] * Verifying Kubernetes components...
	I1206 10:45:06.484465  399286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:45:06.517764  399286 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 10:45:06.517930  399286 kapi.go:59] client config for functional-196950: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.crt", KeyFile:"/home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.key", CAFile:"/home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 10:45:06.518202  399286 addons.go:239] Setting addon default-storageclass=true in "functional-196950"
	I1206 10:45:06.518232  399286 host.go:66] Checking if "functional-196950" exists ...
	I1206 10:45:06.518684  399286 cli_runner.go:164] Run: docker container inspect functional-196950 --format={{.State.Status}}
	I1206 10:45:06.522254  399286 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 10:45:06.525206  399286 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:06.525232  399286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 10:45:06.525299  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:06.551517  399286 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:06.551540  399286 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 10:45:06.551605  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:06.570954  399286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:45:06.593327  399286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:45:06.685314  399286 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:45:06.722168  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:06.737572  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:07.472063  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:07.472098  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:07.472124  399286 retry.go:31] will retry after 153.213078ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:07.472168  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:07.472179  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:07.472186  399286 retry.go:31] will retry after 247.840204ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
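Both applies fail only because nothing is listening on port 8441 yet (the API server is still coming up), so minikube retries with increasing delays via retry.go. A crude shell stand-in for that wait-then-apply loop, using the same paths as the log:

    until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
          /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply \
          -f /etc/kubernetes/addons/storage-provisioner.yaml; do
        sleep 2    # minikube uses jittered, growing backoff instead of a fixed sleep
    done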
	I1206 10:45:07.472279  399286 node_ready.go:35] waiting up to 6m0s for node "functional-196950" to be "Ready" ...
	I1206 10:45:07.472418  399286 type.go:168] "Request Body" body=""
	I1206 10:45:07.472509  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:07.472828  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
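An empty Response status with milliseconds=0 means the GET never reached the server (connection refused, as the later node_ready warning spells out). What the poll is waiting for can be reproduced with kubectl once the API server answers; the jsonpath filter below is illustrative:

    kubectl --kubeconfig /home/jenkins/minikube-integration/22047-362985/kubeconfig \
        get node functional-196950 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'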
	I1206 10:45:07.626184  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:07.684274  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:07.688010  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:07.688045  399286 retry.go:31] will retry after 503.005947ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:07.720209  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:07.781565  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:07.785057  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:07.785089  399286 retry.go:31] will retry after 443.254463ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:07.973439  399286 type.go:168] "Request Body" body=""
	I1206 10:45:07.973529  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:07.974023  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:08.191658  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:08.229200  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:08.274450  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:08.282645  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:08.282730  399286 retry.go:31] will retry after 342.048952ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:08.327096  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:08.327147  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:08.327166  399286 retry.go:31] will retry after 504.811759ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:08.473470  399286 type.go:168] "Request Body" body=""
	I1206 10:45:08.473573  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:08.473913  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:08.625427  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:08.684176  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:08.687968  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:08.688010  399286 retry.go:31] will retry after 1.261411242s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:08.832256  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:08.891180  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:08.894801  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:08.894836  399286 retry.go:31] will retry after 546.340513ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:08.973077  399286 type.go:168] "Request Body" body=""
	I1206 10:45:08.973155  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:08.973522  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:09.442273  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:09.472729  399286 type.go:168] "Request Body" body=""
	I1206 10:45:09.472803  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:09.473092  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:09.473139  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:09.506571  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:09.510870  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:09.510955  399286 retry.go:31] will retry after 985.837399ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:09.950606  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:09.973212  399286 type.go:168] "Request Body" body=""
	I1206 10:45:09.973298  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:09.973577  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:10.030286  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:10.030402  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:10.030452  399286 retry.go:31] will retry after 829.97822ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:10.472519  399286 type.go:168] "Request Body" body=""
	I1206 10:45:10.472588  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:10.472971  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:10.497156  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:10.582698  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:10.582757  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:10.582779  399286 retry.go:31] will retry after 2.303396874s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:10.861265  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:10.923027  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:10.923124  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:10.923150  399286 retry.go:31] will retry after 2.722563752s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:10.973315  399286 type.go:168] "Request Body" body=""
	I1206 10:45:10.973396  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:10.973700  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:11.473530  399286 type.go:168] "Request Body" body=""
	I1206 10:45:11.473608  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:11.474011  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:11.474073  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
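
Editor's note: each ~500ms stanza above is minikube fetching the node object and getting an empty response, with node_ready.go:55 logging the connection-refused error every few attempts. The equivalent Ready-condition check with client-go, as a self-contained sketch: the kubeconfig path and node name are taken from the log, but this is an illustrative reimplementation, not minikube's node_ready.go.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node has condition Ready=True,
// the same status this log is repeatedly retrying.
func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err // e.g. "connect: connection refused" while the apiserver is down
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for {
		ok, err := nodeReady(cs, "functional-196950")
		if err != nil {
			fmt.Println("will retry:", err)
		} else if ok {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
}
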
	I1206 10:45:11.972906  399286 type.go:168] "Request Body" body=""
	I1206 10:45:11.972979  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:11.973246  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:12.472617  399286 type.go:168] "Request Body" body=""
	I1206 10:45:12.472696  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:12.473071  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:12.886451  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:12.946418  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:12.951114  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:12.951151  399286 retry.go:31] will retry after 2.435253477s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:12.973196  399286 type.go:168] "Request Body" body=""
	I1206 10:45:12.973267  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:12.973628  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:13.473384  399286 type.go:168] "Request Body" body=""
	I1206 10:45:13.473455  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:13.473719  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:13.646250  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:13.707346  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:13.707418  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:13.707442  399286 retry.go:31] will retry after 2.81497333s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:13.972564  399286 type.go:168] "Request Body" body=""
	I1206 10:45:13.972648  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:13.972980  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:13.973040  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:14.472608  399286 type.go:168] "Request Body" body=""
	I1206 10:45:14.472684  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:14.473066  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:14.972534  399286 type.go:168] "Request Body" body=""
	I1206 10:45:14.972625  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:14.972955  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:15.386668  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:15.447515  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:15.447555  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:15.447573  399286 retry.go:31] will retry after 2.327509257s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:15.472847  399286 type.go:168] "Request Body" body=""
	I1206 10:45:15.472922  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:15.473272  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:15.973226  399286 type.go:168] "Request Body" body=""
	I1206 10:45:15.973305  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:15.973654  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:15.973708  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:16.473465  399286 type.go:168] "Request Body" body=""
	I1206 10:45:16.473539  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:16.473810  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:16.523188  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:16.580568  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:16.584128  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:16.584161  399286 retry.go:31] will retry after 3.565207529s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:16.972816  399286 type.go:168] "Request Body" body=""
	I1206 10:45:16.972893  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:16.973236  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:17.472948  399286 type.go:168] "Request Body" body=""
	I1206 10:45:17.473028  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:17.473355  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:17.775942  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:17.833742  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:17.838032  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:17.838073  399286 retry.go:31] will retry after 9.046125485s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
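
Editor's note: every distinct failure in this section (kubectl's openapi download, the addon applies, the node polling) reduces to one root cause: nothing is accepting TCP connections on port 8441, either via localhost inside the node or via 192.168.49.2 from the host. A tiny standalone probe one could run to confirm that directly; written as a sketch, not part of the test suite.

package main

import (
	"fmt"
	"net"
	"time"
)

// Probe the two apiserver endpoints seen in the log. With the
// apiserver down, both dials fail immediately with
// "connect: connection refused", the error threaded through every
// kubectl and round-tripper line above.
func main() {
	for _, addr := range []string{"127.0.0.1:8441", "192.168.49.2:8441"} {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Printf("%s unreachable: %v\n", addr, err)
			continue
		}
		conn.Close()
		fmt.Printf("%s reachable\n", addr)
	}
}
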
	I1206 10:45:17.973259  399286 type.go:168] "Request Body" body=""
	I1206 10:45:17.973333  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:17.973605  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:18.473464  399286 type.go:168] "Request Body" body=""
	I1206 10:45:18.473544  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:18.473887  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:18.473936  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:18.972571  399286 type.go:168] "Request Body" body=""
	I1206 10:45:18.972668  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:18.973005  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:19.472497  399286 type.go:168] "Request Body" body=""
	I1206 10:45:19.472590  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:19.472870  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:19.972598  399286 type.go:168] "Request Body" body=""
	I1206 10:45:19.972674  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:19.972970  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:20.150467  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:20.215833  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:20.215885  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:20.215905  399286 retry.go:31] will retry after 9.222024728s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:20.473247  399286 type.go:168] "Request Body" body=""
	I1206 10:45:20.473322  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:20.473670  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:20.973445  399286 type.go:168] "Request Body" body=""
	I1206 10:45:20.973528  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:20.973801  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:20.973861  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:21.472555  399286 type.go:168] "Request Body" body=""
	I1206 10:45:21.472664  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:21.473020  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:21.972799  399286 type.go:168] "Request Body" body=""
	I1206 10:45:21.972877  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:21.973219  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:22.472497  399286 type.go:168] "Request Body" body=""
	I1206 10:45:22.472576  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:22.472904  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:22.972589  399286 type.go:168] "Request Body" body=""
	I1206 10:45:22.972674  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:22.973015  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:23.472753  399286 type.go:168] "Request Body" body=""
	I1206 10:45:23.472835  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:23.473181  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:23.473243  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:23.972733  399286 type.go:168] "Request Body" body=""
	I1206 10:45:23.972804  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:23.973079  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:24.472742  399286 type.go:168] "Request Body" body=""
	I1206 10:45:24.472825  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:24.473193  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:24.972805  399286 type.go:168] "Request Body" body=""
	I1206 10:45:24.972890  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:24.973299  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:25.473054  399286 type.go:168] "Request Body" body=""
	I1206 10:45:25.473127  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:25.473403  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:25.473453  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:25.972761  399286 type.go:168] "Request Body" body=""
	I1206 10:45:25.972834  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:25.973177  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:26.473056  399286 type.go:168] "Request Body" body=""
	I1206 10:45:26.473132  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:26.473476  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:26.884353  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:26.943184  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:26.947029  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:26.947062  399286 retry.go:31] will retry after 13.756266916s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:26.973239  399286 type.go:168] "Request Body" body=""
	I1206 10:45:26.973309  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:26.973589  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:27.473507  399286 type.go:168] "Request Body" body=""
	I1206 10:45:27.473585  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:27.473949  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:27.474006  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:27.972689  399286 type.go:168] "Request Body" body=""
	I1206 10:45:27.972763  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:27.973145  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:28.472835  399286 type.go:168] "Request Body" body=""
	I1206 10:45:28.472909  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:28.473194  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:28.972604  399286 type.go:168] "Request Body" body=""
	I1206 10:45:28.972682  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:28.972972  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:29.438741  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:29.473252  399286 type.go:168] "Request Body" body=""
	I1206 10:45:29.473342  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:29.473619  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:29.500011  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:29.500052  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:29.500073  399286 retry.go:31] will retry after 11.458105653s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:29.972514  399286 type.go:168] "Request Body" body=""
	I1206 10:45:29.972601  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:29.972925  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:29.972975  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:30.472573  399286 type.go:168] "Request Body" body=""
	I1206 10:45:30.472647  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:30.472967  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:30.972595  399286 type.go:168] "Request Body" body=""
	I1206 10:45:30.972703  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:30.973084  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:31.472784  399286 type.go:168] "Request Body" body=""
	I1206 10:45:31.472855  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:31.473199  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:31.972958  399286 type.go:168] "Request Body" body=""
	I1206 10:45:31.973040  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:31.973376  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:31.973432  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:32.473378  399286 type.go:168] "Request Body" body=""
	I1206 10:45:32.473454  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:32.473784  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:32.972445  399286 type.go:168] "Request Body" body=""
	I1206 10:45:32.972534  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:32.972822  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:33.472492  399286 type.go:168] "Request Body" body=""
	I1206 10:45:33.472570  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:33.472871  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:33.972494  399286 type.go:168] "Request Body" body=""
	I1206 10:45:33.972591  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:33.972945  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:34.472578  399286 type.go:168] "Request Body" body=""
	I1206 10:45:34.472650  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:34.473009  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:34.473064  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:34.972732  399286 type.go:168] "Request Body" body=""
	I1206 10:45:34.972808  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:34.973199  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:35.472761  399286 type.go:168] "Request Body" body=""
	I1206 10:45:35.472857  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:35.473192  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:35.972534  399286 type.go:168] "Request Body" body=""
	I1206 10:45:35.972619  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:35.972903  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:36.472813  399286 type.go:168] "Request Body" body=""
	I1206 10:45:36.472898  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:36.473245  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:36.473300  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:36.972930  399286 type.go:168] "Request Body" body=""
	I1206 10:45:36.973016  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:36.973389  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:37.473181  399286 type.go:168] "Request Body" body=""
	I1206 10:45:37.473253  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:37.473531  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:37.973319  399286 type.go:168] "Request Body" body=""
	I1206 10:45:37.973403  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:37.973730  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:38.472498  399286 type.go:168] "Request Body" body=""
	I1206 10:45:38.472583  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:38.472928  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:38.972624  399286 type.go:168] "Request Body" body=""
	I1206 10:45:38.972703  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:38.973126  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:38.973176  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:39.472568  399286 type.go:168] "Request Body" body=""
	I1206 10:45:39.472665  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:39.472987  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:39.972728  399286 type.go:168] "Request Body" body=""
	I1206 10:45:39.972805  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:39.973175  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:40.473384  399286 type.go:168] "Request Body" body=""
	I1206 10:45:40.473456  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:40.473714  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:40.704276  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:40.766032  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:40.766082  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:40.766102  399286 retry.go:31] will retry after 12.834175432s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:40.958402  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:40.972905  399286 type.go:168] "Request Body" body=""
	I1206 10:45:40.972992  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:40.973301  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:40.973353  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:41.030830  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:41.030878  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:41.030900  399286 retry.go:31] will retry after 14.333484689s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:41.472501  399286 type.go:168] "Request Body" body=""
	I1206 10:45:41.472600  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:41.472944  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:41.972853  399286 type.go:168] "Request Body" body=""
	I1206 10:45:41.972920  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:41.973187  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:42.472557  399286 type.go:168] "Request Body" body=""
	I1206 10:45:42.472636  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:42.472968  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:42.972558  399286 type.go:168] "Request Body" body=""
	I1206 10:45:42.972635  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:42.972937  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:43.472504  399286 type.go:168] "Request Body" body=""
	I1206 10:45:43.472579  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:43.472849  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:43.472893  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:43.972555  399286 type.go:168] "Request Body" body=""
	I1206 10:45:43.972629  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:43.972940  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:44.472608  399286 type.go:168] "Request Body" body=""
	I1206 10:45:44.472707  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:44.473088  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:44.972724  399286 type.go:168] "Request Body" body=""
	I1206 10:45:44.972794  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:44.973077  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:45.472782  399286 type.go:168] "Request Body" body=""
	I1206 10:45:45.472865  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:45.473241  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:45.473304  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:45.972826  399286 type.go:168] "Request Body" body=""
	I1206 10:45:45.972906  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:45.973262  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:46.473108  399286 type.go:168] "Request Body" body=""
	I1206 10:45:46.473196  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:46.473467  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:46.973436  399286 type.go:168] "Request Body" body=""
	I1206 10:45:46.973508  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:46.973863  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:47.472551  399286 type.go:168] "Request Body" body=""
	I1206 10:45:47.472626  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:47.472969  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:47.972653  399286 type.go:168] "Request Body" body=""
	I1206 10:45:47.972724  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:47.972985  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:47.973026  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:48.472555  399286 type.go:168] "Request Body" body=""
	I1206 10:45:48.472631  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:48.472979  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:48.972567  399286 type.go:168] "Request Body" body=""
	I1206 10:45:48.972648  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:48.973011  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:49.472610  399286 type.go:168] "Request Body" body=""
	I1206 10:45:49.472682  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:49.473011  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:49.972715  399286 type.go:168] "Request Body" body=""
	I1206 10:45:49.972814  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:49.973135  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:49.973192  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:50.472573  399286 type.go:168] "Request Body" body=""
	I1206 10:45:50.472649  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:50.473004  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:50.972718  399286 type.go:168] "Request Body" body=""
	I1206 10:45:50.972788  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:50.973064  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:51.472737  399286 type.go:168] "Request Body" body=""
	I1206 10:45:51.472812  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:51.473132  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:51.972870  399286 type.go:168] "Request Body" body=""
	I1206 10:45:51.972960  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:51.973314  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:51.973366  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:52.472505  399286 type.go:168] "Request Body" body=""
	I1206 10:45:52.472573  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:52.472847  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:52.972578  399286 type.go:168] "Request Body" body=""
	I1206 10:45:52.972662  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:52.973040  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:53.472618  399286 type.go:168] "Request Body" body=""
	I1206 10:45:53.472697  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:53.473027  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:53.600459  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:53.661736  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:53.665292  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:53.665323  399286 retry.go:31] will retry after 22.486760262s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
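
The apply fails before anything reaches the cluster: with validation enabled, kubectl first downloads the server's OpenAPI schema, so a down apiserver turns every apply into the validation error above (--validate=false would only skip the schema download; the apply itself would still be refused). A sketch of running such a command from Go and surfacing its stderr follows; the paths are copied from the log, but the helper is illustrative rather than minikube's ssh_runner.

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    // applyManifest mirrors the shape of the failing command above. kubectl
    // validates the manifest against the server's OpenAPI schema first, so a
    // down apiserver fails the run before anything is applied.
    func applyManifest(path string) error {
        cmd := exec.Command("sudo",
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
            "apply", "--force", "-f", path)
        var stderr bytes.Buffer
        cmd.Stderr = &stderr
        if err := cmd.Run(); err != nil {
            // Non-zero exit becomes an *exec.ExitError, as in the
            // "Process exited with status 1" lines above.
            return fmt.Errorf("apply %s: %w\nstderr:\n%s", path, err, stderr.String())
        }
        return nil
    }
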
	I1206 10:45:53.972617  399286 type.go:168] "Request Body" body=""
	I1206 10:45:53.972697  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:53.972964  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:54.472589  399286 type.go:168] "Request Body" body=""
	I1206 10:45:54.472671  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:54.473035  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:54.473093  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:54.972750  399286 type.go:168] "Request Body" body=""
	I1206 10:45:54.972837  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:54.973175  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:55.364722  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:55.425632  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:55.425678  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:55.425713  399286 retry.go:31] will retry after 12.507538253s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
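
Two different endpoints are being refused here: the readiness poll dials 192.168.49.2:8441 from outside the node, while kubectl on the node dials localhost:8441. Both failing with connection refused points at the apiserver process itself being down rather than a network-path problem. A quick TCP probe makes the distinction explicit (addresses are from the log; the probe is illustrative):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // probe attempts a bare TCP connect to one of the endpoints seen
    // failing above and reports the dial error, if any.
    func probe(addr string) {
        conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
        if err != nil {
            fmt.Printf("%s: %v\n", addr, err)
            return
        }
        conn.Close()
        fmt.Printf("%s: reachable\n", addr)
    }

    func main() {
        probe("192.168.49.2:8441") // path used by the node-readiness poll
        probe("localhost:8441")    // path used by kubectl on the node
    }
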
	I1206 10:45:55.472809  399286 type.go:168] "Request Body" body=""
	I1206 10:45:55.472887  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:55.473184  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:55.972552  399286 type.go:168] "Request Body" body=""
	I1206 10:45:55.972650  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:55.972997  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:56.472967  399286 type.go:168] "Request Body" body=""
	I1206 10:45:56.473058  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:56.473382  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:56.473432  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:56.973296  399286 type.go:168] "Request Body" body=""
	I1206 10:45:56.973367  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:56.973664  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:57.473479  399286 type.go:168] "Request Body" body=""
	I1206 10:45:57.473548  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:57.473911  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:57.972639  399286 type.go:168] "Request Body" body=""
	I1206 10:45:57.972714  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:57.973013  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:58.472518  399286 type.go:168] "Request Body" body=""
	I1206 10:45:58.472589  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:58.472883  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:58.972593  399286 type.go:168] "Request Body" body=""
	I1206 10:45:58.972667  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:58.973050  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:58.973107  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:59.472758  399286 type.go:168] "Request Body" body=""
	I1206 10:45:59.472833  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:59.473125  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:59.972559  399286 type.go:168] "Request Body" body=""
	I1206 10:45:59.972680  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:59.973026  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:00.472759  399286 type.go:168] "Request Body" body=""
	I1206 10:46:00.472862  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:00.473243  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:00.972535  399286 type.go:168] "Request Body" body=""
	I1206 10:46:00.972611  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:00.972922  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:01.472611  399286 type.go:168] "Request Body" body=""
	I1206 10:46:01.472687  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:01.473449  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:01.473514  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:01.973455  399286 type.go:168] "Request Body" body=""
	I1206 10:46:01.973535  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:01.973907  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:02.472594  399286 type.go:168] "Request Body" body=""
	I1206 10:46:02.472664  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:02.472938  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:02.972637  399286 type.go:168] "Request Body" body=""
	I1206 10:46:02.972721  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:02.973106  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:03.472580  399286 type.go:168] "Request Body" body=""
	I1206 10:46:03.472660  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:03.473047  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:03.972706  399286 type.go:168] "Request Body" body=""
	I1206 10:46:03.972777  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:03.973074  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:03.973125  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:04.472781  399286 type.go:168] "Request Body" body=""
	I1206 10:46:04.472867  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:04.473199  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:04.972938  399286 type.go:168] "Request Body" body=""
	I1206 10:46:04.973022  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:04.973345  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:05.473133  399286 type.go:168] "Request Body" body=""
	I1206 10:46:05.473203  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:05.473463  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:05.973200  399286 type.go:168] "Request Body" body=""
	I1206 10:46:05.973298  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:05.973625  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:05.973682  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:06.472493  399286 type.go:168] "Request Body" body=""
	I1206 10:46:06.472614  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:06.473110  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:06.972852  399286 type.go:168] "Request Body" body=""
	I1206 10:46:06.972929  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:06.973194  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:07.472870  399286 type.go:168] "Request Body" body=""
	I1206 10:46:07.472948  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:07.473250  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:07.933511  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:46:07.973023  399286 type.go:168] "Request Body" body=""
	I1206 10:46:07.973096  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:07.973373  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:07.994514  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:46:07.994569  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:46:07.994603  399286 retry.go:31] will retry after 24.706041433s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:46:08.473166  399286 type.go:168] "Request Body" body=""
	I1206 10:46:08.473240  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:08.473542  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:08.473592  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:08.973437  399286 type.go:168] "Request Body" body=""
	I1206 10:46:08.973586  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:08.973915  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:09.472565  399286 type.go:168] "Request Body" body=""
	I1206 10:46:09.472644  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:09.472997  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:09.972519  399286 type.go:168] "Request Body" body=""
	I1206 10:46:09.972596  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:09.972890  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:10.472617  399286 type.go:168] "Request Body" body=""
	I1206 10:46:10.472695  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:10.473054  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:10.972589  399286 type.go:168] "Request Body" body=""
	I1206 10:46:10.972671  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:10.973007  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:10.973060  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:11.472630  399286 type.go:168] "Request Body" body=""
	I1206 10:46:11.472699  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:11.473093  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:11.972992  399286 type.go:168] "Request Body" body=""
	I1206 10:46:11.973065  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:11.973384  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:12.472990  399286 type.go:168] "Request Body" body=""
	I1206 10:46:12.473133  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:12.473477  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:12.973203  399286 type.go:168] "Request Body" body=""
	I1206 10:46:12.973288  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:12.973547  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:12.973597  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:13.473422  399286 type.go:168] "Request Body" body=""
	I1206 10:46:13.473507  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:13.473830  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:13.972530  399286 type.go:168] "Request Body" body=""
	I1206 10:46:13.972634  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:13.972982  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:14.472697  399286 type.go:168] "Request Body" body=""
	I1206 10:46:14.472780  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:14.473132  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:14.972546  399286 type.go:168] "Request Body" body=""
	I1206 10:46:14.972620  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:14.972954  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:15.472543  399286 type.go:168] "Request Body" body=""
	I1206 10:46:15.472620  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:15.472950  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:15.473077  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:15.972516  399286 type.go:168] "Request Body" body=""
	I1206 10:46:15.972589  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:15.972880  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:16.153289  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:46:16.211194  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:46:16.214959  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:46:16.214991  399286 retry.go:31] will retry after 16.737835039s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
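
The retry delays recorded so far (22.5 s, 12.5 s, 24.7 s, 16.7 s) are randomized rather than strictly exponential, which spreads the apply attempts out while the apiserver recovers. A minimal sketch of that jittered-retry shape, assuming a caller-supplied base delay; this is illustrative, not minikube's retry.go:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retry runs fn until it succeeds or attempts are exhausted, sleeping a
    // jittered delay between tries, in the spirit of the retry.go:31 lines
    // in the log above.
    func retry(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            wait := base + time.Duration(rand.Int63n(int64(base))) // base..2*base
            fmt.Printf("will retry after %s: %v\n", wait, err)
            time.Sleep(wait)
        }
        return err
    }
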
	I1206 10:46:16.473494  399286 type.go:168] "Request Body" body=""
	I1206 10:46:16.473573  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:16.473903  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:16.972909  399286 type.go:168] "Request Body" body=""
	I1206 10:46:16.972986  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:16.973336  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:17.473112  399286 type.go:168] "Request Body" body=""
	I1206 10:46:17.473189  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:17.473465  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:17.473508  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:17.973266  399286 type.go:168] "Request Body" body=""
	I1206 10:46:17.973344  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:17.973710  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:18.472499  399286 type.go:168] "Request Body" body=""
	I1206 10:46:18.472586  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:18.472953  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:18.972650  399286 type.go:168] "Request Body" body=""
	I1206 10:46:18.972719  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:18.973068  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:19.472565  399286 type.go:168] "Request Body" body=""
	I1206 10:46:19.472638  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:19.472948  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:19.972573  399286 type.go:168] "Request Body" body=""
	I1206 10:46:19.972649  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:19.972985  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:19.973044  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:20.472488  399286 type.go:168] "Request Body" body=""
	I1206 10:46:20.472560  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:20.472892  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:20.972569  399286 type.go:168] "Request Body" body=""
	I1206 10:46:20.972648  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:20.973000  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:21.472659  399286 type.go:168] "Request Body" body=""
	I1206 10:46:21.472741  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:21.473075  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:21.972911  399286 type.go:168] "Request Body" body=""
	I1206 10:46:21.972985  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:21.973292  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:21.973342  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:22.473039  399286 type.go:168] "Request Body" body=""
	I1206 10:46:22.473118  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:22.473451  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:22.973318  399286 type.go:168] "Request Body" body=""
	I1206 10:46:22.973392  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:22.973733  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:23.472438  399286 type.go:168] "Request Body" body=""
	I1206 10:46:23.472509  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:23.472819  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:23.972535  399286 type.go:168] "Request Body" body=""
	I1206 10:46:23.972611  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:23.972929  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:24.472539  399286 type.go:168] "Request Body" body=""
	I1206 10:46:24.472621  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:24.472971  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:24.473042  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:24.972529  399286 type.go:168] "Request Body" body=""
	I1206 10:46:24.972598  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:24.972883  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:25.472559  399286 type.go:168] "Request Body" body=""
	I1206 10:46:25.472685  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:25.473033  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:25.972748  399286 type.go:168] "Request Body" body=""
	I1206 10:46:25.972833  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:25.973182  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:26.472983  399286 type.go:168] "Request Body" body=""
	I1206 10:46:26.473065  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:26.473364  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:26.473432  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:26.973366  399286 type.go:168] "Request Body" body=""
	I1206 10:46:26.973451  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:26.973797  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:27.472534  399286 type.go:168] "Request Body" body=""
	I1206 10:46:27.472617  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:27.472962  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:27.972662  399286 type.go:168] "Request Body" body=""
	I1206 10:46:27.972735  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:27.973174  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:28.472540  399286 type.go:168] "Request Body" body=""
	I1206 10:46:28.472653  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:28.472975  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:28.972588  399286 type.go:168] "Request Body" body=""
	I1206 10:46:28.972684  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:28.973062  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:28.973117  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:29.472612  399286 type.go:168] "Request Body" body=""
	I1206 10:46:29.472691  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:29.473027  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:29.972588  399286 type.go:168] "Request Body" body=""
	I1206 10:46:29.972665  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:29.973045  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:30.472639  399286 type.go:168] "Request Body" body=""
	I1206 10:46:30.472714  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:30.473023  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:30.972504  399286 type.go:168] "Request Body" body=""
	I1206 10:46:30.972575  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:30.972851  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:31.472534  399286 type.go:168] "Request Body" body=""
	I1206 10:46:31.472619  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:31.472983  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:31.473045  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:31.973254  399286 type.go:168] "Request Body" body=""
	I1206 10:46:31.973345  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:31.973713  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:32.472451  399286 type.go:168] "Request Body" body=""
	I1206 10:46:32.472529  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:32.472822  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:32.701368  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:46:32.764470  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:46:32.764520  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:46:32.764620  399286 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
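
At this point the storage-provisioner retries are exhausted and addons.go hands the combined failure to out.go as a user-facing warning instead of aborting the start-up; the storageclass addon keeps retrying independently below. A hypothetical sketch of that run-callbacks-and-join-errors shape (the function and its signature are assumptions, not minikube's API):

    package main

    import (
        "errors"
        "fmt"
    )

    // enableAddon runs each apply callback and reports the combined failure,
    // mirroring the "running callbacks: [...]" wording above.
    func enableAddon(name string, callbacks []func() error) {
        var errs []error
        for _, cb := range callbacks {
            if err := cb(); err != nil {
                errs = append(errs, err)
            }
        }
        if err := errors.Join(errs...); err != nil {
            fmt.Printf("! Enabling '%s' returned an error: running callbacks: %v\n", name, err)
        }
    }
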
	I1206 10:46:32.953898  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:46:32.973480  399286 type.go:168] "Request Body" body=""
	I1206 10:46:32.973551  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:32.973819  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:33.013712  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:46:33.017430  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:46:33.017463  399286 retry.go:31] will retry after 30.205234164s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:46:33.472638  399286 type.go:168] "Request Body" body=""
	I1206 10:46:33.472723  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:33.473069  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:33.473124  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:33.972761  399286 type.go:168] "Request Body" body=""
	I1206 10:46:33.972847  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:33.973196  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:34.472595  399286 type.go:168] "Request Body" body=""
	I1206 10:46:34.472673  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:34.473015  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:34.972624  399286 type.go:168] "Request Body" body=""
	I1206 10:46:34.972712  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:34.973072  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:35.472562  399286 type.go:168] "Request Body" body=""
	I1206 10:46:35.472631  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:35.472898  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:35.972594  399286 type.go:168] "Request Body" body=""
	I1206 10:46:35.972691  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:35.973118  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:35.973178  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:36.472537  399286 type.go:168] "Request Body" body=""
	I1206 10:46:36.472612  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:36.472993  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:36.972887  399286 type.go:168] "Request Body" body=""
	I1206 10:46:36.972979  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:36.973257  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:37.472570  399286 type.go:168] "Request Body" body=""
	I1206 10:46:37.472647  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:37.473006  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:37.972720  399286 type.go:168] "Request Body" body=""
	I1206 10:46:37.972795  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:37.973159  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:37.973230  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:38.472827  399286 type.go:168] "Request Body" body=""
	I1206 10:46:38.472914  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:38.473245  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:38.972991  399286 type.go:168] "Request Body" body=""
	I1206 10:46:38.973109  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:38.973430  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:39.473083  399286 type.go:168] "Request Body" body=""
	I1206 10:46:39.473156  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:39.473487  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:39.973120  399286 type.go:168] "Request Body" body=""
	I1206 10:46:39.973195  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:39.973475  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:39.973520  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:40.473349  399286 type.go:168] "Request Body" body=""
	I1206 10:46:40.473426  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:40.473832  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:40.972536  399286 type.go:168] "Request Body" body=""
	I1206 10:46:40.972611  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:40.972967  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:41.472528  399286 type.go:168] "Request Body" body=""
	I1206 10:46:41.472606  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:41.472897  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:41.972839  399286 type.go:168] "Request Body" body=""
	I1206 10:46:41.972921  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:41.973277  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:42.473121  399286 type.go:168] "Request Body" body=""
	I1206 10:46:42.473205  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:42.473535  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:42.473598  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:42.973295  399286 type.go:168] "Request Body" body=""
	I1206 10:46:42.973369  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:42.973633  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:43.474561  399286 type.go:168] "Request Body" body=""
	I1206 10:46:43.474632  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:43.474989  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:43.972751  399286 type.go:168] "Request Body" body=""
	I1206 10:46:43.972830  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:43.973164  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:44.472525  399286 type.go:168] "Request Body" body=""
	I1206 10:46:44.472603  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:44.472924  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:44.972616  399286 type.go:168] "Request Body" body=""
	I1206 10:46:44.972690  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:44.972993  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:44.973041  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:45.472572  399286 type.go:168] "Request Body" body=""
	I1206 10:46:45.472654  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:45.473032  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:45.972746  399286 type.go:168] "Request Body" body=""
	I1206 10:46:45.972818  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:45.973081  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:46.473093  399286 type.go:168] "Request Body" body=""
	I1206 10:46:46.473169  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:46.473502  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:46.972476  399286 type.go:168] "Request Body" body=""
	I1206 10:46:46.972548  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:46.972884  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:47.472504  399286 type.go:168] "Request Body" body=""
	I1206 10:46:47.472574  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:47.472853  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:47.472901  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:47.972549  399286 type.go:168] "Request Body" body=""
	I1206 10:46:47.972622  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:47.972948  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:48.472667  399286 type.go:168] "Request Body" body=""
	I1206 10:46:48.472745  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:48.473110  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:48.972503  399286 type.go:168] "Request Body" body=""
	I1206 10:46:48.972577  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:48.972841  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:49.472552  399286 type.go:168] "Request Body" body=""
	I1206 10:46:49.472628  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:49.472955  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:49.473012  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:49.972575  399286 type.go:168] "Request Body" body=""
	I1206 10:46:49.972653  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:49.972977  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:50.472510  399286 type.go:168] "Request Body" body=""
	I1206 10:46:50.472585  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:50.472943  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:50.972627  399286 type.go:168] "Request Body" body=""
	I1206 10:46:50.972741  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:50.973101  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:51.472815  399286 type.go:168] "Request Body" body=""
	I1206 10:46:51.472914  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:51.473280  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:51.473354  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:51.973315  399286 type.go:168] "Request Body" body=""
	I1206 10:46:51.973390  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:51.973667  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:52.473496  399286 type.go:168] "Request Body" body=""
	I1206 10:46:52.473597  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:52.473928  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:52.972621  399286 type.go:168] "Request Body" body=""
	I1206 10:46:52.972697  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:52.973027  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:53.472511  399286 type.go:168] "Request Body" body=""
	I1206 10:46:53.472581  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:53.472850  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:53.972553  399286 type.go:168] "Request Body" body=""
	I1206 10:46:53.972631  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:53.973006  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:53.973079  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:54.472752  399286 type.go:168] "Request Body" body=""
	I1206 10:46:54.472832  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:54.473199  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:54.972887  399286 type.go:168] "Request Body" body=""
	I1206 10:46:54.972975  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:54.973260  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:55.472573  399286 type.go:168] "Request Body" body=""
	I1206 10:46:55.472649  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:55.473014  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:55.972568  399286 type.go:168] "Request Body" body=""
	I1206 10:46:55.972665  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:55.973053  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:55.973130  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:56.472795  399286 type.go:168] "Request Body" body=""
	I1206 10:46:56.472877  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:56.473146  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:56.972884  399286 type.go:168] "Request Body" body=""
	I1206 10:46:56.972967  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:56.973286  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:57.473158  399286 type.go:168] "Request Body" body=""
	I1206 10:46:57.473263  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:57.473709  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:57.973446  399286 type.go:168] "Request Body" body=""
	I1206 10:46:57.973526  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:57.973793  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:57.973842  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:46:58.472543  399286 type.go:168] "Request Body" body=""
	I1206 10:46:58.472635  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:58.473049  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:58.972820  399286 type.go:168] "Request Body" body=""
	I1206 10:46:58.972904  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:58.973235  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:59.472499  399286 type.go:168] "Request Body" body=""
	I1206 10:46:59.472588  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:59.472924  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:59.972544  399286 type.go:168] "Request Body" body=""
	I1206 10:46:59.972618  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:59.972934  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:00.472668  399286 type.go:168] "Request Body" body=""
	I1206 10:47:00.472777  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:00.473113  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:00.473167  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:00.972603  399286 type.go:168] "Request Body" body=""
	I1206 10:47:00.972685  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:00.973038  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:01.472631  399286 type.go:168] "Request Body" body=""
	I1206 10:47:01.472707  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:01.473290  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:01.973256  399286 type.go:168] "Request Body" body=""
	I1206 10:47:01.973331  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:01.973589  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:02.473484  399286 type.go:168] "Request Body" body=""
	I1206 10:47:02.473561  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:02.473896  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:02.473948  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:02.972573  399286 type.go:168] "Request Body" body=""
	I1206 10:47:02.972648  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:02.972960  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:03.223490  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:47:03.285940  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:47:03.285995  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:47:03.286078  399286 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1206 10:47:03.289299  399286 out.go:179] * Enabled addons: 
	I1206 10:47:03.293166  399286 addons.go:530] duration metric: took 1m56.818196786s for enable addons: enabled=[]
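The addon phase gives up here with enabled=[] after 1m56s: every storageclass apply hit the refused connection, so the retries were abandoned and nothing was enabled. The "duration metric" line is simply elapsed wall time for the phase; a minimal sketch of emitting such a line (standard-library log here; minikube itself logs through klog):

	package main

	import (
		"log"
		"time"
	)

	func main() {
		start := time.Now()
		enabled := []string{} // empty in this run: every apply failed
		// ... addon enable attempts, with retries, would run here ...
		log.Printf("duration metric: took %s for enable addons: enabled=%v",
			time.Since(start), enabled)
	}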
	I1206 10:47:03.473269  399286 type.go:168] "Request Body" body=""
	I1206 10:47:03.473338  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:03.473598  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:03.973428  399286 type.go:168] "Request Body" body=""
	I1206 10:47:03.973501  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:03.973827  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:04.472627  399286 type.go:168] "Request Body" body=""
	I1206 10:47:04.472722  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:04.473116  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:04.972512  399286 type.go:168] "Request Body" body=""
	I1206 10:47:04.972582  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:04.972864  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:04.972905  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:05.472854  399286 type.go:168] "Request Body" body=""
	I1206 10:47:05.472960  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:05.473416  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:05.973522  399286 type.go:168] "Request Body" body=""
	I1206 10:47:05.973609  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:05.973972  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:06.472918  399286 type.go:168] "Request Body" body=""
	I1206 10:47:06.472988  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:06.473277  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:06.973211  399286 type.go:168] "Request Body" body=""
	I1206 10:47:06.973295  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:06.973657  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:06.973714  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:07.473520  399286 type.go:168] "Request Body" body=""
	I1206 10:47:07.473603  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:07.473953  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:07.972629  399286 type.go:168] "Request Body" body=""
	I1206 10:47:07.972706  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:07.973057  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:08.472555  399286 type.go:168] "Request Body" body=""
	I1206 10:47:08.472632  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:08.472984  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:08.972724  399286 type.go:168] "Request Body" body=""
	I1206 10:47:08.972820  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:08.973196  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:09.472563  399286 type.go:168] "Request Body" body=""
	I1206 10:47:09.472642  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:09.472956  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:09.473017  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:09.972573  399286 type.go:168] "Request Body" body=""
	I1206 10:47:09.972651  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:09.973014  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:10.472724  399286 type.go:168] "Request Body" body=""
	I1206 10:47:10.472800  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:10.473133  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:10.972508  399286 type.go:168] "Request Body" body=""
	I1206 10:47:10.972579  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:10.972853  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:11.472590  399286 type.go:168] "Request Body" body=""
	I1206 10:47:11.472669  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:11.473026  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:11.473100  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:11.972903  399286 type.go:168] "Request Body" body=""
	I1206 10:47:11.972976  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:11.973268  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:12.472510  399286 type.go:168] "Request Body" body=""
	I1206 10:47:12.472587  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:12.472925  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:12.972540  399286 type.go:168] "Request Body" body=""
	I1206 10:47:12.972620  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:12.972953  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:13.472625  399286 type.go:168] "Request Body" body=""
	I1206 10:47:13.472698  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:13.473024  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:13.972681  399286 type.go:168] "Request Body" body=""
	I1206 10:47:13.972766  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:13.973081  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:13.973132  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:14.472631  399286 type.go:168] "Request Body" body=""
	I1206 10:47:14.472714  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:14.472985  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:14.972530  399286 type.go:168] "Request Body" body=""
	I1206 10:47:14.972629  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:14.972947  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:15.472648  399286 type.go:168] "Request Body" body=""
	I1206 10:47:15.472724  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:15.472994  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:15.972549  399286 type.go:168] "Request Body" body=""
	I1206 10:47:15.972625  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:15.972986  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:16.472737  399286 type.go:168] "Request Body" body=""
	I1206 10:47:16.472818  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:16.473143  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:16.473210  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:16.972875  399286 type.go:168] "Request Body" body=""
	I1206 10:47:16.972953  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:16.973285  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:17.473095  399286 type.go:168] "Request Body" body=""
	I1206 10:47:17.473172  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:17.473522  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:17.973337  399286 type.go:168] "Request Body" body=""
	I1206 10:47:17.973427  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:17.973777  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:18.473404  399286 type.go:168] "Request Body" body=""
	I1206 10:47:18.473476  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:18.473741  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:18.473792  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:18.972507  399286 type.go:168] "Request Body" body=""
	I1206 10:47:18.972607  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:18.972936  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:19.472645  399286 type.go:168] "Request Body" body=""
	I1206 10:47:19.472722  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:19.473093  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:19.972508  399286 type.go:168] "Request Body" body=""
	I1206 10:47:19.972581  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:19.972852  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:20.472581  399286 type.go:168] "Request Body" body=""
	I1206 10:47:20.472666  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:20.472982  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:20.972583  399286 type.go:168] "Request Body" body=""
	I1206 10:47:20.972663  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:20.973010  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:20.973076  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:21.472734  399286 type.go:168] "Request Body" body=""
	I1206 10:47:21.472809  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:21.473085  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:21.972896  399286 type.go:168] "Request Body" body=""
	I1206 10:47:21.972994  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:21.973342  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:22.473130  399286 type.go:168] "Request Body" body=""
	I1206 10:47:22.473211  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:22.473543  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:22.973319  399286 type.go:168] "Request Body" body=""
	I1206 10:47:22.973388  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:22.973646  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:22.973687  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:23.473432  399286 type.go:168] "Request Body" body=""
	I1206 10:47:23.473517  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:23.473913  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:23.972485  399286 type.go:168] "Request Body" body=""
	I1206 10:47:23.972564  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:23.972906  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:24.472560  399286 type.go:168] "Request Body" body=""
	I1206 10:47:24.472635  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:24.472973  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:24.972571  399286 type.go:168] "Request Body" body=""
	I1206 10:47:24.972646  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:24.973006  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:25.472733  399286 type.go:168] "Request Body" body=""
	I1206 10:47:25.472817  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:25.473171  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:25.473229  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the identical GET poll of https://192.168.49.2:8441/api/v1/nodes/functional-196950 repeats every ~500 ms from 10:47:25 through 10:48:26 (~120 attempts), each returning an empty response; node_ready.go:55 emits the same "connection refused" retry warning every few attempts ...]
	I1206 10:48:27.472549  399286 type.go:168] "Request Body" body=""
	I1206 10:48:27.472616  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:27.472894  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:27.972571  399286 type.go:168] "Request Body" body=""
	I1206 10:48:27.972663  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:27.973010  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:27.973066  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:28.472755  399286 type.go:168] "Request Body" body=""
	I1206 10:48:28.472833  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:28.473174  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:28.972507  399286 type.go:168] "Request Body" body=""
	I1206 10:48:28.972584  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:28.972913  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:29.472601  399286 type.go:168] "Request Body" body=""
	I1206 10:48:29.472680  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:29.473059  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:29.972567  399286 type.go:168] "Request Body" body=""
	I1206 10:48:29.972642  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:29.972943  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:30.472524  399286 type.go:168] "Request Body" body=""
	I1206 10:48:30.472616  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:30.472907  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:30.472969  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:30.972572  399286 type.go:168] "Request Body" body=""
	I1206 10:48:30.972656  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:30.973011  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:31.472727  399286 type.go:168] "Request Body" body=""
	I1206 10:48:31.472811  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:31.473170  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:31.972852  399286 type.go:168] "Request Body" body=""
	I1206 10:48:31.972934  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:31.973257  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:32.472954  399286 type.go:168] "Request Body" body=""
	I1206 10:48:32.473032  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:32.473401  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:32.473457  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:32.973252  399286 type.go:168] "Request Body" body=""
	I1206 10:48:32.973327  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:32.973655  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:33.473425  399286 type.go:168] "Request Body" body=""
	I1206 10:48:33.473493  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:33.473760  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:33.972474  399286 type.go:168] "Request Body" body=""
	I1206 10:48:33.972572  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:33.972878  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:34.472627  399286 type.go:168] "Request Body" body=""
	I1206 10:48:34.472747  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:34.473114  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:34.972774  399286 type.go:168] "Request Body" body=""
	I1206 10:48:34.972854  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:34.973228  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:34.973282  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:35.472597  399286 type.go:168] "Request Body" body=""
	I1206 10:48:35.472674  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:35.473044  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:35.972740  399286 type.go:168] "Request Body" body=""
	I1206 10:48:35.972817  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:35.973175  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:36.473166  399286 type.go:168] "Request Body" body=""
	I1206 10:48:36.473240  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:36.473506  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:36.972504  399286 type.go:168] "Request Body" body=""
	I1206 10:48:36.972595  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:36.972967  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:37.472704  399286 type.go:168] "Request Body" body=""
	I1206 10:48:37.472781  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:37.473151  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:37.473210  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:37.972584  399286 type.go:168] "Request Body" body=""
	I1206 10:48:37.972659  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:37.972980  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:38.472674  399286 type.go:168] "Request Body" body=""
	I1206 10:48:38.472751  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:38.473096  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:38.972809  399286 type.go:168] "Request Body" body=""
	I1206 10:48:38.972894  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:38.973234  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:39.472512  399286 type.go:168] "Request Body" body=""
	I1206 10:48:39.472591  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:39.472888  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:39.972573  399286 type.go:168] "Request Body" body=""
	I1206 10:48:39.972654  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:39.973005  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:39.973062  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:40.472729  399286 type.go:168] "Request Body" body=""
	I1206 10:48:40.472807  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:40.473118  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:40.972789  399286 type.go:168] "Request Body" body=""
	I1206 10:48:40.972865  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:40.973175  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:41.472555  399286 type.go:168] "Request Body" body=""
	I1206 10:48:41.472632  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:41.473019  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:41.972805  399286 type.go:168] "Request Body" body=""
	I1206 10:48:41.972889  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:41.973231  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:41.973283  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:42.472645  399286 type.go:168] "Request Body" body=""
	I1206 10:48:42.472724  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:42.473014  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:42.972545  399286 type.go:168] "Request Body" body=""
	I1206 10:48:42.972620  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:42.972971  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:43.472630  399286 type.go:168] "Request Body" body=""
	I1206 10:48:43.472707  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:43.473070  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:43.972757  399286 type.go:168] "Request Body" body=""
	I1206 10:48:43.972837  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:43.973118  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:44.472549  399286 type.go:168] "Request Body" body=""
	I1206 10:48:44.472623  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:44.472965  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:44.473024  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:44.972697  399286 type.go:168] "Request Body" body=""
	I1206 10:48:44.972778  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:44.973152  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:45.472603  399286 type.go:168] "Request Body" body=""
	I1206 10:48:45.472680  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:45.472954  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:45.972650  399286 type.go:168] "Request Body" body=""
	I1206 10:48:45.972731  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:45.973088  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:46.472844  399286 type.go:168] "Request Body" body=""
	I1206 10:48:46.472930  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:46.473240  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:46.473286  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:46.972876  399286 type.go:168] "Request Body" body=""
	I1206 10:48:46.972956  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:46.973229  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:47.472914  399286 type.go:168] "Request Body" body=""
	I1206 10:48:47.472997  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:47.473351  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:47.973159  399286 type.go:168] "Request Body" body=""
	I1206 10:48:47.973238  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:47.973572  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:48.473300  399286 type.go:168] "Request Body" body=""
	I1206 10:48:48.473369  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:48.473631  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:48.473676  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:48.973423  399286 type.go:168] "Request Body" body=""
	I1206 10:48:48.973495  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:48.973838  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:49.472583  399286 type.go:168] "Request Body" body=""
	I1206 10:48:49.472657  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:49.472982  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:49.972501  399286 type.go:168] "Request Body" body=""
	I1206 10:48:49.972577  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:49.972905  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:50.472592  399286 type.go:168] "Request Body" body=""
	I1206 10:48:50.472663  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:50.473004  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:50.972730  399286 type.go:168] "Request Body" body=""
	I1206 10:48:50.972813  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:50.973202  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:50.973262  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:51.472848  399286 type.go:168] "Request Body" body=""
	I1206 10:48:51.472917  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:51.473221  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:51.973010  399286 type.go:168] "Request Body" body=""
	I1206 10:48:51.973086  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:51.973413  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:52.473222  399286 type.go:168] "Request Body" body=""
	I1206 10:48:52.473300  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:52.473671  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:52.973437  399286 type.go:168] "Request Body" body=""
	I1206 10:48:52.973506  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:52.973775  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:52.973815  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:53.472497  399286 type.go:168] "Request Body" body=""
	I1206 10:48:53.472572  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:53.472897  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:53.972574  399286 type.go:168] "Request Body" body=""
	I1206 10:48:53.972662  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:53.973051  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:54.472686  399286 type.go:168] "Request Body" body=""
	I1206 10:48:54.472759  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:54.473019  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:54.972701  399286 type.go:168] "Request Body" body=""
	I1206 10:48:54.972840  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:54.973196  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:55.472921  399286 type.go:168] "Request Body" body=""
	I1206 10:48:55.473005  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:55.473348  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:55.473405  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:55.973139  399286 type.go:168] "Request Body" body=""
	I1206 10:48:55.973209  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:55.973524  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:56.473496  399286 type.go:168] "Request Body" body=""
	I1206 10:48:56.473586  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:56.473930  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:56.973058  399286 type.go:168] "Request Body" body=""
	I1206 10:48:56.973167  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:56.973523  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:57.473253  399286 type.go:168] "Request Body" body=""
	I1206 10:48:57.473322  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:57.473613  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:57.473655  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:57.973400  399286 type.go:168] "Request Body" body=""
	I1206 10:48:57.973472  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:57.973805  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:58.472543  399286 type.go:168] "Request Body" body=""
	I1206 10:48:58.472624  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:58.472965  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:58.972545  399286 type.go:168] "Request Body" body=""
	I1206 10:48:58.972612  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:58.972871  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:59.472552  399286 type.go:168] "Request Body" body=""
	I1206 10:48:59.472628  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:59.472962  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:59.972576  399286 type.go:168] "Request Body" body=""
	I1206 10:48:59.972655  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:59.973033  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:59.973088  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:00.472755  399286 type.go:168] "Request Body" body=""
	I1206 10:49:00.472825  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:00.473159  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:00.972543  399286 type.go:168] "Request Body" body=""
	I1206 10:49:00.972623  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:00.972982  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:01.472701  399286 type.go:168] "Request Body" body=""
	I1206 10:49:01.472779  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:01.473107  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:01.972890  399286 type.go:168] "Request Body" body=""
	I1206 10:49:01.972966  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:01.973305  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:01.973365  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:02.473122  399286 type.go:168] "Request Body" body=""
	I1206 10:49:02.473195  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:02.473527  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:02.973299  399286 type.go:168] "Request Body" body=""
	I1206 10:49:02.973372  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:02.973717  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:03.473484  399286 type.go:168] "Request Body" body=""
	I1206 10:49:03.473561  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:03.473909  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:03.972610  399286 type.go:168] "Request Body" body=""
	I1206 10:49:03.972692  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:03.972999  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:04.472572  399286 type.go:168] "Request Body" body=""
	I1206 10:49:04.472655  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:04.473009  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:04.473069  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:04.972630  399286 type.go:168] "Request Body" body=""
	I1206 10:49:04.972705  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:04.973016  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:05.472726  399286 type.go:168] "Request Body" body=""
	I1206 10:49:05.472816  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:05.473184  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:05.972911  399286 type.go:168] "Request Body" body=""
	I1206 10:49:05.972991  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:05.973382  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:06.473278  399286 type.go:168] "Request Body" body=""
	I1206 10:49:06.473354  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:06.473638  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:06.473679  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:06.972552  399286 type.go:168] "Request Body" body=""
	I1206 10:49:06.972644  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:06.972984  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:07.472897  399286 type.go:168] "Request Body" body=""
	I1206 10:49:07.472974  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:07.473313  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:07.973068  399286 type.go:168] "Request Body" body=""
	I1206 10:49:07.973145  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:07.973511  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:08.473278  399286 type.go:168] "Request Body" body=""
	I1206 10:49:08.473354  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:08.473693  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:08.473753  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:08.972455  399286 type.go:168] "Request Body" body=""
	I1206 10:49:08.972536  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:08.972879  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:09.472560  399286 type.go:168] "Request Body" body=""
	I1206 10:49:09.472627  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:09.472882  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:09.972560  399286 type.go:168] "Request Body" body=""
	I1206 10:49:09.972633  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:09.972993  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:10.472707  399286 type.go:168] "Request Body" body=""
	I1206 10:49:10.472787  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:10.473125  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:10.972519  399286 type.go:168] "Request Body" body=""
	I1206 10:49:10.972593  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:10.972923  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:10.972987  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:11.472665  399286 type.go:168] "Request Body" body=""
	I1206 10:49:11.472741  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:11.473101  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:11.972857  399286 type.go:168] "Request Body" body=""
	I1206 10:49:11.972931  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:11.973257  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:12.472588  399286 type.go:168] "Request Body" body=""
	I1206 10:49:12.472664  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:12.472989  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:12.972552  399286 type.go:168] "Request Body" body=""
	I1206 10:49:12.972628  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:12.972971  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:12.973026  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:13.472582  399286 type.go:168] "Request Body" body=""
	I1206 10:49:13.472659  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:13.473010  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:13.972527  399286 type.go:168] "Request Body" body=""
	I1206 10:49:13.972603  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:13.972984  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:14.472669  399286 type.go:168] "Request Body" body=""
	I1206 10:49:14.472750  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:14.473111  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:14.972837  399286 type.go:168] "Request Body" body=""
	I1206 10:49:14.972917  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:14.973262  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:14.973321  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:15.472591  399286 type.go:168] "Request Body" body=""
	I1206 10:49:15.472663  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:15.472956  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:15.972563  399286 type.go:168] "Request Body" body=""
	I1206 10:49:15.972639  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:15.972991  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:16.472531  399286 type.go:168] "Request Body" body=""
	I1206 10:49:16.472616  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:16.472996  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:17.473050  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	[... roughly 120 further identical polling cycles elided: the same GET to https://192.168.49.2:8441/api/v1/nodes/functional-196950 repeats every ~500ms from 10:49:17 through 10:50:18, each cycle logging an empty "Request Body", the request with the same two headers, and an instant empty response (status="" milliseconds=0) because the connection is refused; the node_ready.go:55 "will retry" warning recurs every ~2-2.5s throughout ...]
	I1206 10:50:18.473432  399286 type.go:168] "Request Body" body=""
	I1206 10:50:18.473500  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:18.473770  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:18.473813  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:18.972497  399286 type.go:168] "Request Body" body=""
	I1206 10:50:18.972578  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:18.972916  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:19.472629  399286 type.go:168] "Request Body" body=""
	I1206 10:50:19.472708  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:19.473031  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:19.972542  399286 type.go:168] "Request Body" body=""
	I1206 10:50:19.972615  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:19.972928  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:20.472572  399286 type.go:168] "Request Body" body=""
	I1206 10:50:20.472675  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:20.473027  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:20.972608  399286 type.go:168] "Request Body" body=""
	I1206 10:50:20.972691  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:20.973039  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:20.973093  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:21.472736  399286 type.go:168] "Request Body" body=""
	I1206 10:50:21.472814  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:21.473165  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:21.972862  399286 type.go:168] "Request Body" body=""
	I1206 10:50:21.972940  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:21.973280  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:22.473081  399286 type.go:168] "Request Body" body=""
	I1206 10:50:22.473165  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:22.473518  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:22.973293  399286 type.go:168] "Request Body" body=""
	I1206 10:50:22.973363  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:22.973736  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:22.973785  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:23.472452  399286 type.go:168] "Request Body" body=""
	I1206 10:50:23.472536  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:23.472892  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:23.972617  399286 type.go:168] "Request Body" body=""
	I1206 10:50:23.972696  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:23.973045  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:24.472734  399286 type.go:168] "Request Body" body=""
	I1206 10:50:24.472803  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:24.473087  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:24.972771  399286 type.go:168] "Request Body" body=""
	I1206 10:50:24.972846  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:24.973213  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:25.472766  399286 type.go:168] "Request Body" body=""
	I1206 10:50:25.472842  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:25.473168  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:25.473226  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:25.972592  399286 type.go:168] "Request Body" body=""
	I1206 10:50:25.972661  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:25.972949  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:26.472514  399286 type.go:168] "Request Body" body=""
	I1206 10:50:26.472594  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:26.472931  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:26.972899  399286 type.go:168] "Request Body" body=""
	I1206 10:50:26.972973  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:26.973261  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:27.473424  399286 type.go:168] "Request Body" body=""
	I1206 10:50:27.473500  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:27.473765  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:27.473815  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:27.972509  399286 type.go:168] "Request Body" body=""
	I1206 10:50:27.972592  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:27.972936  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:28.472486  399286 type.go:168] "Request Body" body=""
	I1206 10:50:28.472562  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:28.472923  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:28.972440  399286 type.go:168] "Request Body" body=""
	I1206 10:50:28.972512  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:28.972780  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:29.472546  399286 type.go:168] "Request Body" body=""
	I1206 10:50:29.472624  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:29.472988  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:29.972566  399286 type.go:168] "Request Body" body=""
	I1206 10:50:29.972650  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:29.972976  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:29.973030  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:30.472526  399286 type.go:168] "Request Body" body=""
	I1206 10:50:30.472595  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:30.472867  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:30.972538  399286 type.go:168] "Request Body" body=""
	I1206 10:50:30.972619  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:30.972967  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:31.472532  399286 type.go:168] "Request Body" body=""
	I1206 10:50:31.472614  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:31.472943  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:31.972822  399286 type.go:168] "Request Body" body=""
	I1206 10:50:31.972898  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:31.973163  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:31.973204  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:32.472846  399286 type.go:168] "Request Body" body=""
	I1206 10:50:32.472938  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:32.473300  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:32.973154  399286 type.go:168] "Request Body" body=""
	I1206 10:50:32.973228  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:32.973551  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:33.473237  399286 type.go:168] "Request Body" body=""
	I1206 10:50:33.473313  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:33.473581  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:33.973393  399286 type.go:168] "Request Body" body=""
	I1206 10:50:33.973465  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:33.973800  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:33.973854  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:34.472558  399286 type.go:168] "Request Body" body=""
	I1206 10:50:34.472637  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:34.472972  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:34.972519  399286 type.go:168] "Request Body" body=""
	I1206 10:50:34.972599  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:34.972924  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:35.472616  399286 type.go:168] "Request Body" body=""
	I1206 10:50:35.472695  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:35.473014  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:35.972579  399286 type.go:168] "Request Body" body=""
	I1206 10:50:35.972655  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:35.973034  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:36.472776  399286 type.go:168] "Request Body" body=""
	I1206 10:50:36.472844  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:36.473149  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:36.473213  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:36.972912  399286 type.go:168] "Request Body" body=""
	I1206 10:50:36.972989  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:36.973334  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:37.473153  399286 type.go:168] "Request Body" body=""
	I1206 10:50:37.473234  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:37.473545  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:37.973320  399286 type.go:168] "Request Body" body=""
	I1206 10:50:37.973389  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:37.973719  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:38.473508  399286 type.go:168] "Request Body" body=""
	I1206 10:50:38.473585  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:38.473917  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:38.473974  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:38.972671  399286 type.go:168] "Request Body" body=""
	I1206 10:50:38.972749  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:38.973130  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:39.472813  399286 type.go:168] "Request Body" body=""
	I1206 10:50:39.472890  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:39.473190  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:39.972557  399286 type.go:168] "Request Body" body=""
	I1206 10:50:39.972649  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:39.972986  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:40.472571  399286 type.go:168] "Request Body" body=""
	I1206 10:50:40.472658  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:40.472975  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:40.972515  399286 type.go:168] "Request Body" body=""
	I1206 10:50:40.972584  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:40.972892  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:40.972940  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:41.472601  399286 type.go:168] "Request Body" body=""
	I1206 10:50:41.472684  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:41.473063  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:41.972896  399286 type.go:168] "Request Body" body=""
	I1206 10:50:41.972981  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:41.973322  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:42.472688  399286 type.go:168] "Request Body" body=""
	I1206 10:50:42.472753  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:42.473021  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:42.972603  399286 type.go:168] "Request Body" body=""
	I1206 10:50:42.972678  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:42.973024  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:42.973077  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:43.472713  399286 type.go:168] "Request Body" body=""
	I1206 10:50:43.472795  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:43.473163  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:43.972566  399286 type.go:168] "Request Body" body=""
	I1206 10:50:43.972641  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:43.972943  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:44.472578  399286 type.go:168] "Request Body" body=""
	I1206 10:50:44.472651  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:44.472949  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:44.972603  399286 type.go:168] "Request Body" body=""
	I1206 10:50:44.972680  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:44.973055  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:44.973112  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:45.472760  399286 type.go:168] "Request Body" body=""
	I1206 10:50:45.472833  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:45.473168  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:45.972582  399286 type.go:168] "Request Body" body=""
	I1206 10:50:45.972658  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:45.972953  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:46.472685  399286 type.go:168] "Request Body" body=""
	I1206 10:50:46.472772  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:46.473240  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:46.972953  399286 type.go:168] "Request Body" body=""
	I1206 10:50:46.973034  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:46.973311  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:46.973368  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:47.473089  399286 type.go:168] "Request Body" body=""
	I1206 10:50:47.473162  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:47.473495  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:47.973337  399286 type.go:168] "Request Body" body=""
	I1206 10:50:47.973414  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:47.973765  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:48.472461  399286 type.go:168] "Request Body" body=""
	I1206 10:50:48.472532  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:48.472800  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:48.972484  399286 type.go:168] "Request Body" body=""
	I1206 10:50:48.972555  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:48.972856  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:49.472591  399286 type.go:168] "Request Body" body=""
	I1206 10:50:49.472674  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:49.472971  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:49.473017  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:49.972486  399286 type.go:168] "Request Body" body=""
	I1206 10:50:49.972555  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:49.972818  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:50.472595  399286 type.go:168] "Request Body" body=""
	I1206 10:50:50.472673  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:50.473012  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:50.972610  399286 type.go:168] "Request Body" body=""
	I1206 10:50:50.972682  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:50.973033  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:51.472648  399286 type.go:168] "Request Body" body=""
	I1206 10:50:51.472722  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:51.473053  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:51.473104  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:51.972927  399286 type.go:168] "Request Body" body=""
	I1206 10:50:51.973006  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:51.973306  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:52.473170  399286 type.go:168] "Request Body" body=""
	I1206 10:50:52.473264  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:52.473614  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:52.973403  399286 type.go:168] "Request Body" body=""
	I1206 10:50:52.973483  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:52.973779  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:53.472504  399286 type.go:168] "Request Body" body=""
	I1206 10:50:53.472613  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:53.472956  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:53.972692  399286 type.go:168] "Request Body" body=""
	I1206 10:50:53.972766  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:53.973130  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:53.973190  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:54.472528  399286 type.go:168] "Request Body" body=""
	I1206 10:50:54.472607  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:54.472878  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:54.972605  399286 type.go:168] "Request Body" body=""
	I1206 10:50:54.972688  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:54.973068  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:55.472818  399286 type.go:168] "Request Body" body=""
	I1206 10:50:55.472895  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:55.473202  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:55.972520  399286 type.go:168] "Request Body" body=""
	I1206 10:50:55.972603  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:55.972935  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:56.472654  399286 type.go:168] "Request Body" body=""
	I1206 10:50:56.472729  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:56.473032  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:56.473084  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:56.972842  399286 type.go:168] "Request Body" body=""
	I1206 10:50:56.972920  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:56.973318  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:57.473075  399286 type.go:168] "Request Body" body=""
	I1206 10:50:57.473143  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:57.473455  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:57.973290  399286 type.go:168] "Request Body" body=""
	I1206 10:50:57.973373  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:57.973726  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:58.472463  399286 type.go:168] "Request Body" body=""
	I1206 10:50:58.472542  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:58.472877  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:58.972566  399286 type.go:168] "Request Body" body=""
	I1206 10:50:58.972641  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:58.972980  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:58.973033  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:59.472570  399286 type.go:168] "Request Body" body=""
	I1206 10:50:59.472643  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:59.472941  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:59.972571  399286 type.go:168] "Request Body" body=""
	I1206 10:50:59.972657  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:59.973000  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:51:00.472549  399286 type.go:168] "Request Body" body=""
	I1206 10:51:00.472645  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:51:00.473014  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:51:00.972576  399286 type.go:168] "Request Body" body=""
	I1206 10:51:00.972652  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:51:00.972971  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:51:01.472606  399286 type.go:168] "Request Body" body=""
	I1206 10:51:01.472680  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:51:01.473017  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:51:01.473078  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:51:01.972762  399286 type.go:168] "Request Body" body=""
	I1206 10:51:01.972832  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:51:01.973108  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:51:02.472577  399286 type.go:168] "Request Body" body=""
	I1206 10:51:02.472675  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:51:02.473037  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:51:02.972793  399286 type.go:168] "Request Body" body=""
	I1206 10:51:02.972870  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:51:02.973217  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:51:03.472909  399286 type.go:168] "Request Body" body=""
	I1206 10:51:03.472990  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:51:03.473316  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:51:03.473367  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:51:03.973130  399286 type.go:168] "Request Body" body=""
	I1206 10:51:03.973216  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:51:03.973569  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:51:04.473254  399286 type.go:168] "Request Body" body=""
	I1206 10:51:04.473335  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:51:04.473708  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:51:04.973449  399286 type.go:168] "Request Body" body=""
	I1206 10:51:04.973548  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:51:04.973831  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:51:05.472562  399286 type.go:168] "Request Body" body=""
	I1206 10:51:05.472640  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:51:05.472982  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:51:05.972586  399286 type.go:168] "Request Body" body=""
	I1206 10:51:05.972670  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:51:05.973021  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:51:05.973092  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:51:06.472600  399286 type.go:168] "Request Body" body=""
	I1206 10:51:06.472689  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:51:06.473022  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:51:06.972909  399286 type.go:168] "Request Body" body=""
	I1206 10:51:06.972998  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:51:06.973336  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:51:07.473123  399286 type.go:168] "Request Body" body=""
	I1206 10:51:07.473186  399286 node_ready.go:38] duration metric: took 6m0.000853216s for node "functional-196950" to be "Ready" ...
	I1206 10:51:07.476374  399286 out.go:203] 
	W1206 10:51:07.479349  399286 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1206 10:51:07.479391  399286 out.go:285] * 
	W1206 10:51:07.481554  399286 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:51:07.484691  399286 out.go:203] 
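
	The trace above is minikube's node-readiness wait: every 500ms it issues a protobuf-preferring GET for the node object, treats "connection refused" as retryable (surfacing it as a periodic node_ready.go warning), and gives up when the 6-minute budget expires, which is what the final "WaitNodeCondition: context deadline exceeded" failure reports. A minimal client-go sketch of that loop follows; it is illustrative only, under stated assumptions (function names and the kubeconfig source are mine, not minikube's actual code):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the apiserver for the node every 500ms until it
	// reports Ready or the 6-minute budget runs out, mirroring the cadence
	// of the log above.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					// "dial tcp 192.168.49.2:8441: connect: connection refused"
					// lands here: logged as a warning and retried, never fatal
					// on its own.
					return false, nil
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		// Prefer protobuf with a JSON fallback, matching the Accept header
		// on every request in the trace.
		cfg.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"
		cfg.ContentType = "application/vnd.kubernetes.protobuf"
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// A timeout here is what the test reports as "WaitNodeCondition:
		// context deadline exceeded".
		fmt.Println(waitNodeReady(context.Background(), cs, "functional-196950"))
	}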
	
	
	==> CRI-O <==
	Dec 06 10:51:16 functional-196950 crio[5345]: time="2025-12-06T10:51:16.681090489Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=a6644682-b155-4975-9b4d-b3a25826b669 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:51:17 functional-196950 crio[5345]: time="2025-12-06T10:51:17.759167896Z" level=info msg="Checking image status: minikube-local-cache-test:functional-196950" id=de398f8f-3930-4a58-b990-1c3c5536fc97 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:51:17 functional-196950 crio[5345]: time="2025-12-06T10:51:17.759341149Z" level=info msg="Resolving \"minikube-local-cache-test\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 06 10:51:17 functional-196950 crio[5345]: time="2025-12-06T10:51:17.759414823Z" level=info msg="Image minikube-local-cache-test:functional-196950 not found" id=de398f8f-3930-4a58-b990-1c3c5536fc97 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:51:17 functional-196950 crio[5345]: time="2025-12-06T10:51:17.759504638Z" level=info msg="Neither image nor artfiact minikube-local-cache-test:functional-196950 found" id=de398f8f-3930-4a58-b990-1c3c5536fc97 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:51:17 functional-196950 crio[5345]: time="2025-12-06T10:51:17.800436231Z" level=info msg="Checking image status: docker.io/library/minikube-local-cache-test:functional-196950" id=3839a3f9-7a20-49dc-a450-400650f95d3e name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:51:17 functional-196950 crio[5345]: time="2025-12-06T10:51:17.800583005Z" level=info msg="Image docker.io/library/minikube-local-cache-test:functional-196950 not found" id=3839a3f9-7a20-49dc-a450-400650f95d3e name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:51:17 functional-196950 crio[5345]: time="2025-12-06T10:51:17.800624844Z" level=info msg="Neither image nor artfiact docker.io/library/minikube-local-cache-test:functional-196950 found" id=3839a3f9-7a20-49dc-a450-400650f95d3e name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:51:17 functional-196950 crio[5345]: time="2025-12-06T10:51:17.825690578Z" level=info msg="Checking image status: localhost/library/minikube-local-cache-test:functional-196950" id=7bfa5ab4-ee23-448e-8c7c-6013d4d125e1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:51:17 functional-196950 crio[5345]: time="2025-12-06T10:51:17.825854969Z" level=info msg="Image localhost/library/minikube-local-cache-test:functional-196950 not found" id=7bfa5ab4-ee23-448e-8c7c-6013d4d125e1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:51:17 functional-196950 crio[5345]: time="2025-12-06T10:51:17.82590516Z" level=info msg="Neither image nor artfiact localhost/library/minikube-local-cache-test:functional-196950 found" id=7bfa5ab4-ee23-448e-8c7c-6013d4d125e1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:51:18 functional-196950 crio[5345]: time="2025-12-06T10:51:18.834420397Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=b1480b3b-5d55-4e0e-a195-dda8a56ce4fe name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:51:19 functional-196950 crio[5345]: time="2025-12-06T10:51:19.174640107Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=3e7de2ab-8f6d-4650-af37-a5bf48c4f78c name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:51:19 functional-196950 crio[5345]: time="2025-12-06T10:51:19.174865644Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=3e7de2ab-8f6d-4650-af37-a5bf48c4f78c name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:51:19 functional-196950 crio[5345]: time="2025-12-06T10:51:19.174940475Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=3e7de2ab-8f6d-4650-af37-a5bf48c4f78c name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:51:19 functional-196950 crio[5345]: time="2025-12-06T10:51:19.752464757Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=f100257a-1b76-48a9-9d71-34fd29259970 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:51:19 functional-196950 crio[5345]: time="2025-12-06T10:51:19.752617653Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=f100257a-1b76-48a9-9d71-34fd29259970 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:51:19 functional-196950 crio[5345]: time="2025-12-06T10:51:19.752655446Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=f100257a-1b76-48a9-9d71-34fd29259970 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:51:19 functional-196950 crio[5345]: time="2025-12-06T10:51:19.778059202Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=6b4c32e1-a492-4fc8-b625-6d4935efa3a7 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:51:19 functional-196950 crio[5345]: time="2025-12-06T10:51:19.778214847Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=6b4c32e1-a492-4fc8-b625-6d4935efa3a7 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:51:19 functional-196950 crio[5345]: time="2025-12-06T10:51:19.77825968Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=6b4c32e1-a492-4fc8-b625-6d4935efa3a7 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:51:19 functional-196950 crio[5345]: time="2025-12-06T10:51:19.805594859Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=fc342d1a-0b15-4db6-884f-13091ec17c55 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:51:19 functional-196950 crio[5345]: time="2025-12-06T10:51:19.805730762Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=fc342d1a-0b15-4db6-884f-13091ec17c55 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:51:19 functional-196950 crio[5345]: time="2025-12-06T10:51:19.805765478Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=fc342d1a-0b15-4db6-884f-13091ec17c55 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:51:20 functional-196950 crio[5345]: time="2025-12-06T10:51:20.475486781Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=f433a295-3c0b-4836-ab7c-00cce7815750 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:51:22.070774    9391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:51:22.071411    9391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:51:22.073157    9391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:51:22.073819    9391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:51:22.075599    9391 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 6 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014752] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503231] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.065820] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.901896] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.944603] kauditd_printk_skb: 39 callbacks suppressed
	[Dec 6 09:04] hrtimer: interrupt took 32230230 ns
	[Dec 6 10:25] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 6 10:26] overlayfs: idmapped layers are currently not supported
	[  +0.066821] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 6 10:32] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:33] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:51] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 10:51:22 up  2:33,  0 user,  load average: 0.69, 0.38, 0.88
	Linux functional-196950 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 10:51:19 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:51:19 functional-196950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 826.
	Dec 06 10:51:19 functional-196950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:19 functional-196950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:19 functional-196950 kubelet[9242]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:51:19 functional-196950 kubelet[9242]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:51:19 functional-196950 kubelet[9242]: E1206 10:51:19.940538    9242 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:51:19 functional-196950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:51:19 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:51:20 functional-196950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 827.
	Dec 06 10:51:20 functional-196950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:20 functional-196950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:20 functional-196950 kubelet[9285]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:51:20 functional-196950 kubelet[9285]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:51:20 functional-196950 kubelet[9285]: E1206 10:51:20.786758    9285 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:51:20 functional-196950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:51:20 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:51:21 functional-196950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 828.
	Dec 06 10:51:21 functional-196950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:21 functional-196950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:21 functional-196950 kubelet[9305]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:51:21 functional-196950 kubelet[9305]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:51:21 functional-196950 kubelet[9305]: E1206 10:51:21.544373    9305 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:51:21 functional-196950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:51:21 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
-- /stdout --
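The kubelet section above shows why the node never becomes Ready: the v1.35.0-beta.0 kubelet refuses to validate its configuration on a cgroup v1 host, and systemd restarts it in a loop. A minimal check of the node's cgroup version from the host, sketched here for illustration (assumes the profile container is still running; not part of the recorded run):

	# "cgroup2fs" means cgroup v2; "tmpfs" means cgroup v1, which this kubelet rejects
	out/minikube-linux-arm64 -p functional-196950 ssh -- stat -fc %T /sys/fs/cgroup/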
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-196950 -n functional-196950
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-196950 -n functional-196950: exit status 2 (364.460803ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-196950" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.51s)
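The restart counters in the kubelet log above (826-828) indicate the crash loop had been running long before this test started. A sketch for tailing the loop directly, assuming SSH access to the kic container is still available:

	# last 20 kubelet journal entries; each restart repeats the cgroup v1 validation error
	out/minikube-linux-arm64 -p functional-196950 ssh -- sudo journalctl -u kubelet --no-pager -n 20
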
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.49s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-196950 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-196950 get pods: exit status 1 (110.859238ms)
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-196950 get pods": exit status 1
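Here kubectl dials the apiserver at 192.168.49.2:8441, the endpoint recorded for this profile. A quick probe of that endpoint and of the host-side port mapping, sketched under the assumption that the container is reachable from the test host, reproduces the refusal:

	# should fail while the apiserver is down
	curl -sk --connect-timeout 2 https://192.168.49.2:8441/healthz || echo "apiserver unreachable"
	# show the 127.0.0.1 port docker publishes for the apiserver (see the inspect output below)
	docker port functional-196950 8441/tcp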
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-196950
helpers_test.go:243: (dbg) docker inspect functional-196950:
-- stdout --
	[
	    {
	        "Id": "d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1",
	        "Created": "2025-12-06T10:36:45.201779678Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 393848,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T10:36:45.318229053Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/hostname",
	        "HostsPath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/hosts",
	        "LogPath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1-json.log",
	        "Name": "/functional-196950",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-196950:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-196950",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1",
	                "LowerDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1-init/diff:/var/lib/docker/overlay2/5011226d55616c9977b14c1fe617d1302fe59373df05ce8ec6e21b79143a1c57/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-196950",
	                "Source": "/var/lib/docker/volumes/functional-196950/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-196950",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-196950",
	                "name.minikube.sigs.k8s.io": "functional-196950",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9b8f961d55d7529aed7b841f2ac9f818c22ff12b8ad73f2d6bcee22656d9749a",
	            "SandboxKey": "/var/run/docker/netns/9b8f961d55d7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33158"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-196950": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:c1:40:2a:93:47",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a566bfdfd33a868cf61e5b18b36cbd55e9868f24cbb091e055ae606aeb8c6f03",
	                    "EndpointID": "452fe32bde0c42c4c35d700488ae93aeecc6c6a971ac6f1a8a492dbc4b328ed9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-196950",
	                        "d150aac7296d"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
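Most of the inspect blob is irrelevant to this failure; the two facts that matter are that apiserver port 8441 is published on 127.0.0.1:33161 and the node IP is 192.168.49.2. Both can be pulled directly with Go templates (the same template syntax the harness itself uses for its status checks); a sketch, not taken from the run:

	docker inspect functional-196950 --format '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'   # 33161
	docker inspect functional-196950 --format '{{(index .NetworkSettings.Networks "functional-196950").IPAddress}}'  # 192.168.49.2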
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-196950 -n functional-196950
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-196950 -n functional-196950: exit status 2 (332.074416ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-196950 logs -n 25: (1.1179476s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-205266 image ls --format short --alsologtostderr                                                                                       │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image   │ functional-205266 image ls --format yaml --alsologtostderr                                                                                        │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ ssh     │ functional-205266 ssh pgrep buildkitd                                                                                                             │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │                     │
	│ image   │ functional-205266 image ls --format json --alsologtostderr                                                                                        │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image   │ functional-205266 image build -t localhost/my-image:functional-205266 testdata/build --alsologtostderr                                            │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image   │ functional-205266 image ls --format table --alsologtostderr                                                                                       │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image   │ functional-205266 image ls                                                                                                                        │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ delete  │ -p functional-205266                                                                                                                              │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ start   │ -p functional-196950 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │                     │
	│ start   │ -p functional-196950 --alsologtostderr -v=8                                                                                                       │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:45 UTC │                     │
	│ cache   │ functional-196950 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ cache   │ functional-196950 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ cache   │ functional-196950 cache add registry.k8s.io/pause:latest                                                                                          │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ cache   │ functional-196950 cache add minikube-local-cache-test:functional-196950                                                                           │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ cache   │ functional-196950 cache delete minikube-local-cache-test:functional-196950                                                                        │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ ssh     │ functional-196950 ssh sudo crictl images                                                                                                          │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ ssh     │ functional-196950 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ ssh     │ functional-196950 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │                     │
	│ cache   │ functional-196950 cache reload                                                                                                                    │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ ssh     │ functional-196950 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ kubectl │ functional-196950 kubectl -- --context functional-196950 get pods                                                                                 │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:45:01
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:45:01.787203  399286 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:45:01.787433  399286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:45:01.787467  399286 out.go:374] Setting ErrFile to fd 2...
	I1206 10:45:01.787489  399286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:45:01.787778  399286 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 10:45:01.788186  399286 out.go:368] Setting JSON to false
	I1206 10:45:01.789151  399286 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8853,"bootTime":1765009049,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 10:45:01.789259  399286 start.go:143] virtualization:  
	I1206 10:45:01.792729  399286 out.go:179] * [functional-196950] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:45:01.796494  399286 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 10:45:01.796574  399286 notify.go:221] Checking for updates...
	I1206 10:45:01.802323  399286 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:45:01.805290  399286 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 10:45:01.808768  399286 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-362985/.minikube
	I1206 10:45:01.811515  399286 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:45:01.814379  399286 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:45:01.817672  399286 config.go:182] Loaded profile config "functional-196950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:45:01.817798  399286 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:45:01.851887  399286 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:45:01.852009  399286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:45:01.921321  399286 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:45:01.909571102 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:45:01.921426  399286 docker.go:319] overlay module found
	I1206 10:45:01.926314  399286 out.go:179] * Using the docker driver based on existing profile
	I1206 10:45:01.929149  399286 start.go:309] selected driver: docker
	I1206 10:45:01.929174  399286 start.go:927] validating driver "docker" against &{Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:45:01.929299  399286 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:45:01.929402  399286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:45:02.005684  399286 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 10:45:01.991905909 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:45:02.006178  399286 cni.go:84] Creating CNI manager for ""
	I1206 10:45:02.006252  399286 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:45:02.006308  399286 start.go:353] cluster config:
	{Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:45:02.012455  399286 out.go:179] * Starting "functional-196950" primary control-plane node in "functional-196950" cluster
	I1206 10:45:02.015293  399286 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 10:45:02.018502  399286 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:45:02.021547  399286 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 10:45:02.021609  399286 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1206 10:45:02.021620  399286 cache.go:65] Caching tarball of preloaded images
	I1206 10:45:02.021746  399286 preload.go:238] Found /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1206 10:45:02.021762  399286 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1206 10:45:02.021883  399286 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/config.json ...
	I1206 10:45:02.022120  399286 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:45:02.058171  399286 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 10:45:02.058196  399286 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 10:45:02.058216  399286 cache.go:243] Successfully downloaded all kic artifacts
	I1206 10:45:02.058248  399286 start.go:360] acquireMachinesLock for functional-196950: {Name:mkd2471f275d1d2a438cb4ce89f1d1521a0fb340 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:45:02.058324  399286 start.go:364] duration metric: took 51.241µs to acquireMachinesLock for "functional-196950"
	I1206 10:45:02.058347  399286 start.go:96] Skipping create...Using existing machine configuration
	I1206 10:45:02.058352  399286 fix.go:54] fixHost starting: 
	I1206 10:45:02.058623  399286 cli_runner.go:164] Run: docker container inspect functional-196950 --format={{.State.Status}}
	I1206 10:45:02.075952  399286 fix.go:112] recreateIfNeeded on functional-196950: state=Running err=<nil>
	W1206 10:45:02.075984  399286 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 10:45:02.079219  399286 out.go:252] * Updating the running docker "functional-196950" container ...
	I1206 10:45:02.079261  399286 machine.go:94] provisionDockerMachine start ...
	I1206 10:45:02.079396  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:02.097606  399286 main.go:143] libmachine: Using SSH client type: native
	I1206 10:45:02.097945  399286 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1206 10:45:02.097963  399286 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:45:02.251117  399286 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-196950
	
	I1206 10:45:02.251145  399286 ubuntu.go:182] provisioning hostname "functional-196950"
	I1206 10:45:02.251226  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:02.270896  399286 main.go:143] libmachine: Using SSH client type: native
	I1206 10:45:02.271293  399286 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1206 10:45:02.271357  399286 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-196950 && echo "functional-196950" | sudo tee /etc/hostname
	I1206 10:45:02.434988  399286 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-196950
	
	I1206 10:45:02.435098  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:02.453713  399286 main.go:143] libmachine: Using SSH client type: native
	I1206 10:45:02.454033  399286 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1206 10:45:02.454055  399286 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-196950' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-196950/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-196950' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:45:02.607868  399286 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 10:45:02.607903  399286 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-362985/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-362985/.minikube}
	I1206 10:45:02.607940  399286 ubuntu.go:190] setting up certificates
	I1206 10:45:02.607949  399286 provision.go:84] configureAuth start
	I1206 10:45:02.608015  399286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-196950
	I1206 10:45:02.626134  399286 provision.go:143] copyHostCerts
	I1206 10:45:02.626186  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem
	I1206 10:45:02.626227  399286 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem, removing ...
	I1206 10:45:02.626247  399286 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem
	I1206 10:45:02.626323  399286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem (1082 bytes)
	I1206 10:45:02.626456  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem
	I1206 10:45:02.626477  399286 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem, removing ...
	I1206 10:45:02.626487  399286 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem
	I1206 10:45:02.626523  399286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem (1123 bytes)
	I1206 10:45:02.626584  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem
	I1206 10:45:02.626607  399286 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem, removing ...
	I1206 10:45:02.626611  399286 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem
	I1206 10:45:02.626634  399286 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem (1679 bytes)
	I1206 10:45:02.626683  399286 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem org=jenkins.functional-196950 san=[127.0.0.1 192.168.49.2 functional-196950 localhost minikube]
	I1206 10:45:02.961448  399286 provision.go:177] copyRemoteCerts
	I1206 10:45:02.961531  399286 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:45:02.961575  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:02.978755  399286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:45:03.095893  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1206 10:45:03.095982  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 10:45:03.114611  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1206 10:45:03.114706  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 10:45:03.135133  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1206 10:45:03.135195  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 10:45:03.153562  399286 provision.go:87] duration metric: took 545.588133ms to configureAuth
	I1206 10:45:03.153601  399286 ubuntu.go:206] setting minikube options for container-runtime
	I1206 10:45:03.153843  399286 config.go:182] Loaded profile config "functional-196950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:45:03.153992  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:03.171946  399286 main.go:143] libmachine: Using SSH client type: native
	I1206 10:45:03.172256  399286 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1206 10:45:03.172279  399286 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 10:45:03.524489  399286 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 10:45:03.524512  399286 machine.go:97] duration metric: took 1.445242076s to provisionDockerMachine
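
The CRIO_MINIKUBE_OPTIONS write above runs over an SSH session to the kic container on 127.0.0.1:33158 using the id_rsa key shown in the sshutil lines. A rough sketch of that pattern with golang.org/x/crypto/ssh (not minikube's ssh_runner; a simpler remote command is substituted, and host-key checking is skipped only because the target is a throwaway local container):

// Sketch: run a provisioning command over SSH, roughly as the
// sshutil/ssh_runner steps above do.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33158", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local throwaway container only
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("sudo systemctl restart crio")
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}
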
	I1206 10:45:03.524523  399286 start.go:293] postStartSetup for "functional-196950" (driver="docker")
	I1206 10:45:03.524536  399286 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:45:03.524603  399286 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:45:03.524644  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:03.555449  399286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:45:03.668233  399286 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:45:03.672046  399286 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1206 10:45:03.672068  399286 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1206 10:45:03.672073  399286 command_runner.go:130] > VERSION_ID="12"
	I1206 10:45:03.672078  399286 command_runner.go:130] > VERSION="12 (bookworm)"
	I1206 10:45:03.672084  399286 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1206 10:45:03.672087  399286 command_runner.go:130] > ID=debian
	I1206 10:45:03.672092  399286 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1206 10:45:03.672114  399286 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1206 10:45:03.672130  399286 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1206 10:45:03.672206  399286 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 10:45:03.672228  399286 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 10:45:03.672240  399286 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-362985/.minikube/addons for local assets ...
	I1206 10:45:03.672300  399286 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-362985/.minikube/files for local assets ...
	I1206 10:45:03.672390  399286 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem -> 3648552.pem in /etc/ssl/certs
	I1206 10:45:03.672402  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem -> /etc/ssl/certs/3648552.pem
	I1206 10:45:03.672481  399286 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/test/nested/copy/364855/hosts -> hosts in /etc/test/nested/copy/364855
	I1206 10:45:03.672489  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/test/nested/copy/364855/hosts -> /etc/test/nested/copy/364855/hosts
	I1206 10:45:03.672536  399286 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/364855
	I1206 10:45:03.681376  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem --> /etc/ssl/certs/3648552.pem (1708 bytes)
	I1206 10:45:03.700845  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/test/nested/copy/364855/hosts --> /etc/test/nested/copy/364855/hosts (40 bytes)
	I1206 10:45:03.720695  399286 start.go:296] duration metric: took 196.153156ms for postStartSetup
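
The filesync scan above mirrors a local .minikube/files tree into the machine, preserving relative paths (e.g. etc/ssl/certs/3648552.pem lands at /etc/ssl/certs/3648552.pem). A toy walk that computes the same mapping, assuming the path layout shown in the log (not minikube's filesync.go):

// Sketch: map each file under a local "files" tree to its in-VM destination.
package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
	"strings"
)

func main() {
	root := "/home/jenkins/minikube-integration/22047-362985/.minikube/files"
	filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		dest := strings.TrimPrefix(p, root) // e.g. /etc/ssl/certs/3648552.pem
		fmt.Printf("%s -> %s\n", p, dest)
		return nil
	})
}
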
	I1206 10:45:03.720782  399286 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:45:03.720851  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:03.739871  399286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:45:03.844136  399286 command_runner.go:130] > 11%
	I1206 10:45:03.844709  399286 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 10:45:03.849387  399286 command_runner.go:130] > 174G
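
The two df probes above read the used percentage and free gigabytes for /var. The same numbers can come straight from statfs(2); a Linux-only sketch (the percentage here is an approximation of df's rounding, not a byte-for-byte match):

// Sketch: the free-space check above via statfs(2) instead of df|awk.
package main

import (
	"fmt"
	"syscall"
)

func main() {
	var st syscall.Statfs_t
	if err := syscall.Statfs("/var", &st); err != nil {
		panic(err)
	}
	freeGB := st.Bavail * uint64(st.Bsize) / (1 << 30)
	usedPct := 100 - 100*st.Bavail/st.Blocks // approximate; df rounds differently
	fmt.Printf("%d%% used, %dG available\n", usedPct, freeGB)
}
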
	I1206 10:45:03.849978  399286 fix.go:56] duration metric: took 1.791620292s for fixHost
	I1206 10:45:03.850000  399286 start.go:83] releasing machines lock for "functional-196950", held for 1.791664797s
	I1206 10:45:03.850077  399286 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-196950
	I1206 10:45:03.867785  399286 ssh_runner.go:195] Run: cat /version.json
	I1206 10:45:03.867838  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:03.868113  399286 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:45:03.868167  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:03.886546  399286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:45:03.911694  399286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:45:03.991370  399286 command_runner.go:130] > {"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764843390-22032", "minikube_version": "v1.37.0", "commit": "d7bfd7d6d80c3eeb1d6cf1c5f081f8642bc1997e"}
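
/version.json is plain JSON, and its shape is fully visible in the line above. A quick decode of that payload (the struct and its field names are illustrative, not minikube's types):

// Sketch: decoding the /version.json payload shown above.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	raw := []byte(`{"iso_version": "v1.37.0-1763503576-21924", "kicbase_version": "v0.0.48-1764843390-22032", "minikube_version": "v1.37.0", "commit": "d7bfd7d6d80c3eeb1d6cf1c5f081f8642bc1997e"}`)
	var v struct {
		ISOVersion      string `json:"iso_version"`
		KicbaseVersion  string `json:"kicbase_version"`
		MinikubeVersion string `json:"minikube_version"`
		Commit          string `json:"commit"`
	}
	if err := json.Unmarshal(raw, &v); err != nil {
		panic(err)
	}
	fmt.Println(v.MinikubeVersion, v.Commit)
}
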
	I1206 10:45:03.991537  399286 ssh_runner.go:195] Run: systemctl --version
	I1206 10:45:04.088215  399286 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1206 10:45:04.091250  399286 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1206 10:45:04.091291  399286 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1206 10:45:04.091431  399286 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 10:45:04.130964  399286 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1206 10:45:04.136249  399286 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1206 10:45:04.136293  399286 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:45:04.136352  399286 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:45:04.145113  399286 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 10:45:04.145182  399286 start.go:496] detecting cgroup driver to use...
	I1206 10:45:04.145222  399286 detect.go:187] detected "cgroupfs" cgroup driver on host os
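
The "detected cgroupfs cgroup driver" line comes from minikube's driver detection. A common heuristic for the same question is sketched below; it checks for the cgroup v2 unified hierarchy and for systemd, and is an assumption-laden approximation, not detect.go itself:

// Sketch: a rough cgroup/driver probe (heuristic, not minikube's detect.go).
package main

import (
	"fmt"
	"os"
)

func main() {
	// cgroup v2 exposes a unified hierarchy with this control file.
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		fmt.Println("cgroup v2 (unified hierarchy)")
	} else {
		fmt.Println("cgroup v1 (legacy hierarchy)")
	}
	// Whether systemd should manage cgroups is a separate question; a quick
	// proxy is whether /run/systemd/system exists (systemd is PID 1).
	if _, err := os.Stat("/run/systemd/system"); err == nil {
		fmt.Println("systemd present; the 'systemd' cgroup driver is an option")
	} else {
		fmt.Println("no systemd; 'cgroupfs' driver")
	}
}
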
	I1206 10:45:04.145282  399286 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 10:45:04.161420  399286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 10:45:04.175205  399286 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:45:04.175315  399286 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:45:04.191496  399286 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:45:04.205243  399286 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:45:04.349911  399286 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:45:04.470887  399286 docker.go:234] disabling docker service ...
	I1206 10:45:04.471006  399286 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:45:04.486933  399286 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:45:04.500707  399286 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:45:04.632842  399286 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:45:04.756279  399286 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 10:45:04.770461  399286 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:45:04.785365  399286 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1206 10:45:04.786482  399286 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 10:45:04.786596  399286 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:45:04.796852  399286 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 10:45:04.796980  399286 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:45:04.806654  399286 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:45:04.816002  399286 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:45:04.825576  399286 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:45:04.834547  399286 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:45:04.844889  399286 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:45:04.854032  399286 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
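
Each sed invocation above rewrites one key in the 02-crio.conf drop-in (pause_image, cgroup_manager, conmon_cgroup, default_sysctls). For readers who prefer not to reverse-engineer the sed, here are the first two edits expressed as in-memory Go regexp rewrites; minikube actually shells out to sed as logged:

// Sketch: the pause_image/cgroup_manager sed edits above as Go regexps.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := "pause_image = \"old\"\ncgroup_manager = \"systemd\"\n"
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}
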
	I1206 10:45:04.863103  399286 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:45:04.870297  399286 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1206 10:45:04.871475  399286 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 10:45:04.879247  399286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:45:04.992959  399286 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 10:45:05.192927  399286 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 10:45:05.193085  399286 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 10:45:05.197937  399286 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1206 10:45:05.197964  399286 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1206 10:45:05.197971  399286 command_runner.go:130] > Device: 0,72	Inode: 1640        Links: 1
	I1206 10:45:05.197987  399286 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 10:45:05.198031  399286 command_runner.go:130] > Access: 2025-12-06 10:45:05.125759427 +0000
	I1206 10:45:05.198049  399286 command_runner.go:130] > Modify: 2025-12-06 10:45:05.125759427 +0000
	I1206 10:45:05.198060  399286 command_runner.go:130] > Change: 2025-12-06 10:45:05.125759427 +0000
	I1206 10:45:05.198063  399286 command_runner.go:130] >  Birth: -
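
"Will wait 60s for socket path" is a poll-until-timeout on the CRI-O socket, confirmed here by the stat output. A self-contained sketch of that loop (the poll interval is an assumption):

// Sketch: wait up to 60s for /var/run/crio/crio.sock to appear as a socket.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Println("socket ready")
}
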
	I1206 10:45:05.198081  399286 start.go:564] Will wait 60s for crictl version
	I1206 10:45:05.198158  399286 ssh_runner.go:195] Run: which crictl
	I1206 10:45:05.202333  399286 command_runner.go:130] > /usr/local/bin/crictl
	I1206 10:45:05.202451  399286 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 10:45:05.227773  399286 command_runner.go:130] > Version:  0.1.0
	I1206 10:45:05.227855  399286 command_runner.go:130] > RuntimeName:  cri-o
	I1206 10:45:05.227876  399286 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1206 10:45:05.227895  399286 command_runner.go:130] > RuntimeApiVersion:  v1
	I1206 10:45:05.230308  399286 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 10:45:05.230460  399286 ssh_runner.go:195] Run: crio --version
	I1206 10:45:05.261871  399286 command_runner.go:130] > crio version 1.34.3
	I1206 10:45:05.261971  399286 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1206 10:45:05.261992  399286 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1206 10:45:05.262014  399286 command_runner.go:130] >    GitTreeState:   dirty
	I1206 10:45:05.262045  399286 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1206 10:45:05.262062  399286 command_runner.go:130] >    GoVersion:      go1.24.6
	I1206 10:45:05.262083  399286 command_runner.go:130] >    Compiler:       gc
	I1206 10:45:05.262102  399286 command_runner.go:130] >    Platform:       linux/arm64
	I1206 10:45:05.262141  399286 command_runner.go:130] >    Linkmode:       static
	I1206 10:45:05.262176  399286 command_runner.go:130] >    BuildTags:
	I1206 10:45:05.262192  399286 command_runner.go:130] >      static
	I1206 10:45:05.262229  399286 command_runner.go:130] >      netgo
	I1206 10:45:05.262248  399286 command_runner.go:130] >      osusergo
	I1206 10:45:05.262264  399286 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1206 10:45:05.262283  399286 command_runner.go:130] >      seccomp
	I1206 10:45:05.262317  399286 command_runner.go:130] >      apparmor
	I1206 10:45:05.262335  399286 command_runner.go:130] >      selinux
	I1206 10:45:05.262352  399286 command_runner.go:130] >    LDFlags:          unknown
	I1206 10:45:05.262371  399286 command_runner.go:130] >    SeccompEnabled:   true
	I1206 10:45:05.262402  399286 command_runner.go:130] >    AppArmorEnabled:  false
	I1206 10:45:05.263735  399286 ssh_runner.go:195] Run: crio --version
	I1206 10:45:05.292275  399286 command_runner.go:130] > crio version 1.34.3
	I1206 10:45:05.292350  399286 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1206 10:45:05.292370  399286 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1206 10:45:05.292389  399286 command_runner.go:130] >    GitTreeState:   dirty
	I1206 10:45:05.292419  399286 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1206 10:45:05.292445  399286 command_runner.go:130] >    GoVersion:      go1.24.6
	I1206 10:45:05.292464  399286 command_runner.go:130] >    Compiler:       gc
	I1206 10:45:05.292484  399286 command_runner.go:130] >    Platform:       linux/arm64
	I1206 10:45:05.292510  399286 command_runner.go:130] >    Linkmode:       static
	I1206 10:45:05.292529  399286 command_runner.go:130] >    BuildTags:
	I1206 10:45:05.292548  399286 command_runner.go:130] >      static
	I1206 10:45:05.292577  399286 command_runner.go:130] >      netgo
	I1206 10:45:05.292594  399286 command_runner.go:130] >      osusergo
	I1206 10:45:05.292622  399286 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1206 10:45:05.292652  399286 command_runner.go:130] >      seccomp
	I1206 10:45:05.292669  399286 command_runner.go:130] >      apparmor
	I1206 10:45:05.292692  399286 command_runner.go:130] >      selinux
	I1206 10:45:05.292731  399286 command_runner.go:130] >    LDFlags:          unknown
	I1206 10:45:05.292749  399286 command_runner.go:130] >    SeccompEnabled:   true
	I1206 10:45:05.292767  399286 command_runner.go:130] >    AppArmorEnabled:  false
	I1206 10:45:05.300434  399286 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1206 10:45:05.303425  399286 cli_runner.go:164] Run: docker network inspect functional-196950 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:45:05.320718  399286 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 10:45:05.324954  399286 command_runner.go:130] > 192.168.49.1	host.minikube.internal
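
The grep above verifies that host.minikube.internal maps to the network gateway inside the machine. An equivalent check in Go:

// Sketch: the /etc/hosts probe above, done in Go instead of grep.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/hosts")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "192.168.49.1") && strings.HasSuffix(line, "host.minikube.internal") {
			fmt.Println("found:", line)
		}
	}
}
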
	I1206 10:45:05.325142  399286 kubeadm.go:884] updating cluster {Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:45:05.325270  399286 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 10:45:05.325346  399286 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:45:05.356177  399286 command_runner.go:130] > {
	I1206 10:45:05.356195  399286 command_runner.go:130] >   "images":  [
	I1206 10:45:05.356199  399286 command_runner.go:130] >     {
	I1206 10:45:05.356208  399286 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1206 10:45:05.356213  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.356218  399286 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1206 10:45:05.356222  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356226  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.356235  399286 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1206 10:45:05.356243  399286 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1206 10:45:05.356246  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356251  399286 command_runner.go:130] >       "size":  "111333938",
	I1206 10:45:05.356254  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.356259  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.356262  399286 command_runner.go:130] >     },
	I1206 10:45:05.356265  399286 command_runner.go:130] >     {
	I1206 10:45:05.356272  399286 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1206 10:45:05.356285  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.356291  399286 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1206 10:45:05.356294  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356298  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.356307  399286 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1206 10:45:05.356315  399286 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1206 10:45:05.356318  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356322  399286 command_runner.go:130] >       "size":  "29037500",
	I1206 10:45:05.356326  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.356334  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.356337  399286 command_runner.go:130] >     },
	I1206 10:45:05.356340  399286 command_runner.go:130] >     {
	I1206 10:45:05.356346  399286 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1206 10:45:05.356350  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.356355  399286 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1206 10:45:05.356358  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356362  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.356369  399286 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1206 10:45:05.356377  399286 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1206 10:45:05.356380  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356385  399286 command_runner.go:130] >       "size":  "74491780",
	I1206 10:45:05.356389  399286 command_runner.go:130] >       "username":  "nonroot",
	I1206 10:45:05.356393  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.356396  399286 command_runner.go:130] >     },
	I1206 10:45:05.356399  399286 command_runner.go:130] >     {
	I1206 10:45:05.356405  399286 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1206 10:45:05.356409  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.356428  399286 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1206 10:45:05.356433  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356438  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.356446  399286 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1206 10:45:05.356453  399286 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1206 10:45:05.356457  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356465  399286 command_runner.go:130] >       "size":  "60857170",
	I1206 10:45:05.356469  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.356472  399286 command_runner.go:130] >         "value":  "0"
	I1206 10:45:05.356475  399286 command_runner.go:130] >       },
	I1206 10:45:05.356488  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.356492  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.356495  399286 command_runner.go:130] >     },
	I1206 10:45:05.356498  399286 command_runner.go:130] >     {
	I1206 10:45:05.356505  399286 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1206 10:45:05.356508  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.356513  399286 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1206 10:45:05.356516  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356520  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.356528  399286 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1206 10:45:05.356536  399286 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1206 10:45:05.356539  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356543  399286 command_runner.go:130] >       "size":  "84949999",
	I1206 10:45:05.356546  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.356550  399286 command_runner.go:130] >         "value":  "0"
	I1206 10:45:05.356553  399286 command_runner.go:130] >       },
	I1206 10:45:05.356557  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.356561  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.356564  399286 command_runner.go:130] >     },
	I1206 10:45:05.356567  399286 command_runner.go:130] >     {
	I1206 10:45:05.356573  399286 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1206 10:45:05.356577  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.356583  399286 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1206 10:45:05.356586  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356590  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.356598  399286 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1206 10:45:05.356606  399286 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1206 10:45:05.356609  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356617  399286 command_runner.go:130] >       "size":  "72170325",
	I1206 10:45:05.356623  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.356627  399286 command_runner.go:130] >         "value":  "0"
	I1206 10:45:05.356631  399286 command_runner.go:130] >       },
	I1206 10:45:05.356634  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.356638  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.356641  399286 command_runner.go:130] >     },
	I1206 10:45:05.356643  399286 command_runner.go:130] >     {
	I1206 10:45:05.356650  399286 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1206 10:45:05.356654  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.356659  399286 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1206 10:45:05.356662  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356666  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.356674  399286 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1206 10:45:05.356681  399286 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1206 10:45:05.356684  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356688  399286 command_runner.go:130] >       "size":  "74106775",
	I1206 10:45:05.356692  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.356695  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.356698  399286 command_runner.go:130] >     },
	I1206 10:45:05.356701  399286 command_runner.go:130] >     {
	I1206 10:45:05.356708  399286 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1206 10:45:05.356711  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.356716  399286 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1206 10:45:05.356719  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356723  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.356730  399286 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1206 10:45:05.356747  399286 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1206 10:45:05.356751  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356755  399286 command_runner.go:130] >       "size":  "49822549",
	I1206 10:45:05.356759  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.356763  399286 command_runner.go:130] >         "value":  "0"
	I1206 10:45:05.356766  399286 command_runner.go:130] >       },
	I1206 10:45:05.356770  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.356778  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.356781  399286 command_runner.go:130] >     },
	I1206 10:45:05.356784  399286 command_runner.go:130] >     {
	I1206 10:45:05.356790  399286 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1206 10:45:05.356794  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.356798  399286 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1206 10:45:05.356801  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356805  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.356812  399286 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1206 10:45:05.356820  399286 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1206 10:45:05.356823  399286 command_runner.go:130] >       ],
	I1206 10:45:05.356826  399286 command_runner.go:130] >       "size":  "519884",
	I1206 10:45:05.356830  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.356833  399286 command_runner.go:130] >         "value":  "65535"
	I1206 10:45:05.356836  399286 command_runner.go:130] >       },
	I1206 10:45:05.356840  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.356843  399286 command_runner.go:130] >       "pinned":  true
	I1206 10:45:05.356850  399286 command_runner.go:130] >     }
	I1206 10:45:05.356853  399286 command_runner.go:130] >   ]
	I1206 10:45:05.356857  399286 command_runner.go:130] > }
	I1206 10:45:05.358491  399286 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:45:05.358523  399286 crio.go:433] Images already preloaded, skipping extraction
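
Both preload checks parse the output of `sudo crictl images --output json`. The payload's shape is visible in the log above; a minimal decoder, with the field set trimmed to what is shown:

// Sketch: decoding the `crictl images --output json` payload shown above.
package main

import (
	"encoding/json"
	"fmt"
)

type imageList struct {
	Images []struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Pinned      bool     `json:"pinned"`
	} `json:"images"`
}

func main() {
	payload := []byte(`{"images":[{"id":"d7b1...","repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884","pinned":true}]}`)
	var l imageList
	if err := json.Unmarshal(payload, &l); err != nil {
		panic(err)
	}
	for _, img := range l.Images {
		fmt.Println(img.RepoTags, img.Size, img.Pinned)
	}
}
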
	I1206 10:45:05.358585  399286 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:45:05.381820  399286 command_runner.go:130] > {
	I1206 10:45:05.381840  399286 command_runner.go:130] >   "images":  [
	I1206 10:45:05.381844  399286 command_runner.go:130] >     {
	I1206 10:45:05.381853  399286 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1206 10:45:05.381857  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.381864  399286 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1206 10:45:05.381867  399286 command_runner.go:130] >       ],
	I1206 10:45:05.381871  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.381880  399286 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1206 10:45:05.381888  399286 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1206 10:45:05.381892  399286 command_runner.go:130] >       ],
	I1206 10:45:05.381896  399286 command_runner.go:130] >       "size":  "111333938",
	I1206 10:45:05.381900  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.381909  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.381912  399286 command_runner.go:130] >     },
	I1206 10:45:05.381916  399286 command_runner.go:130] >     {
	I1206 10:45:05.381922  399286 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1206 10:45:05.381926  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.381932  399286 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1206 10:45:05.381935  399286 command_runner.go:130] >       ],
	I1206 10:45:05.381939  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.381947  399286 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1206 10:45:05.381956  399286 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1206 10:45:05.381959  399286 command_runner.go:130] >       ],
	I1206 10:45:05.381963  399286 command_runner.go:130] >       "size":  "29037500",
	I1206 10:45:05.381967  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.381973  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.381977  399286 command_runner.go:130] >     },
	I1206 10:45:05.381980  399286 command_runner.go:130] >     {
	I1206 10:45:05.381987  399286 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1206 10:45:05.381990  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.381999  399286 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1206 10:45:05.382003  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382007  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.382014  399286 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1206 10:45:05.382022  399286 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1206 10:45:05.382025  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382029  399286 command_runner.go:130] >       "size":  "74491780",
	I1206 10:45:05.382033  399286 command_runner.go:130] >       "username":  "nonroot",
	I1206 10:45:05.382037  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.382040  399286 command_runner.go:130] >     },
	I1206 10:45:05.382043  399286 command_runner.go:130] >     {
	I1206 10:45:05.382049  399286 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1206 10:45:05.382053  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.382058  399286 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1206 10:45:05.382063  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382067  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.382074  399286 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1206 10:45:05.382082  399286 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1206 10:45:05.382085  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382089  399286 command_runner.go:130] >       "size":  "60857170",
	I1206 10:45:05.382093  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.382096  399286 command_runner.go:130] >         "value":  "0"
	I1206 10:45:05.382100  399286 command_runner.go:130] >       },
	I1206 10:45:05.382398  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.382411  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.382415  399286 command_runner.go:130] >     },
	I1206 10:45:05.382419  399286 command_runner.go:130] >     {
	I1206 10:45:05.382427  399286 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1206 10:45:05.382437  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.382443  399286 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1206 10:45:05.382446  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382450  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.382463  399286 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1206 10:45:05.382476  399286 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1206 10:45:05.382479  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382484  399286 command_runner.go:130] >       "size":  "84949999",
	I1206 10:45:05.382492  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.382495  399286 command_runner.go:130] >         "value":  "0"
	I1206 10:45:05.382499  399286 command_runner.go:130] >       },
	I1206 10:45:05.382503  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.382507  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.382510  399286 command_runner.go:130] >     },
	I1206 10:45:05.382514  399286 command_runner.go:130] >     {
	I1206 10:45:05.382524  399286 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1206 10:45:05.382528  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.382534  399286 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1206 10:45:05.382541  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382546  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.382555  399286 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1206 10:45:05.382568  399286 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1206 10:45:05.382571  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382575  399286 command_runner.go:130] >       "size":  "72170325",
	I1206 10:45:05.382579  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.382583  399286 command_runner.go:130] >         "value":  "0"
	I1206 10:45:05.382590  399286 command_runner.go:130] >       },
	I1206 10:45:05.382594  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.382597  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.382601  399286 command_runner.go:130] >     },
	I1206 10:45:05.382604  399286 command_runner.go:130] >     {
	I1206 10:45:05.382615  399286 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1206 10:45:05.382618  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.382624  399286 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1206 10:45:05.382627  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382631  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.382643  399286 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1206 10:45:05.382651  399286 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1206 10:45:05.382658  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382666  399286 command_runner.go:130] >       "size":  "74106775",
	I1206 10:45:05.382672  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.382676  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.382679  399286 command_runner.go:130] >     },
	I1206 10:45:05.382682  399286 command_runner.go:130] >     {
	I1206 10:45:05.382693  399286 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1206 10:45:05.382697  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.382702  399286 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1206 10:45:05.382706  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382710  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.382722  399286 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1206 10:45:05.382745  399286 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1206 10:45:05.382753  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382757  399286 command_runner.go:130] >       "size":  "49822549",
	I1206 10:45:05.382761  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.382765  399286 command_runner.go:130] >         "value":  "0"
	I1206 10:45:05.382768  399286 command_runner.go:130] >       },
	I1206 10:45:05.382772  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.382780  399286 command_runner.go:130] >       "pinned":  false
	I1206 10:45:05.382783  399286 command_runner.go:130] >     },
	I1206 10:45:05.382786  399286 command_runner.go:130] >     {
	I1206 10:45:05.382793  399286 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1206 10:45:05.382797  399286 command_runner.go:130] >       "repoTags":  [
	I1206 10:45:05.382805  399286 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1206 10:45:05.382808  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382812  399286 command_runner.go:130] >       "repoDigests":  [
	I1206 10:45:05.382820  399286 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1206 10:45:05.382832  399286 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1206 10:45:05.382835  399286 command_runner.go:130] >       ],
	I1206 10:45:05.382839  399286 command_runner.go:130] >       "size":  "519884",
	I1206 10:45:05.382843  399286 command_runner.go:130] >       "uid":  {
	I1206 10:45:05.382847  399286 command_runner.go:130] >         "value":  "65535"
	I1206 10:45:05.382857  399286 command_runner.go:130] >       },
	I1206 10:45:05.382861  399286 command_runner.go:130] >       "username":  "",
	I1206 10:45:05.382865  399286 command_runner.go:130] >       "pinned":  true
	I1206 10:45:05.382868  399286 command_runner.go:130] >     }
	I1206 10:45:05.382871  399286 command_runner.go:130] >   ]
	I1206 10:45:05.382874  399286 command_runner.go:130] > }
	I1206 10:45:05.396183  399286 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:45:05.396208  399286 cache_images.go:86] Images are preloaded, skipping loading
	I1206 10:45:05.396219  399286 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1206 10:45:05.396325  399286 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-196950 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
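
The kubelet systemd unit above is generated from the cluster config that follows it. A toy text/template rendering of the same shape, with the flag list trimmed; this is not minikube's kubeadm.go template, just a sketch of the technique:

// Sketch: render a kubelet drop-in like the one above with text/template.
package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	t.Execute(os.Stdout, map[string]string{
		"Version": "v1.35.0-beta.0",
		"Node":    "functional-196950",
		"IP":      "192.168.49.2",
	})
}
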
	I1206 10:45:05.396421  399286 ssh_runner.go:195] Run: crio config
	I1206 10:45:05.425462  399286 command_runner.go:130] ! time="2025-12-06T10:45:05.425119459Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1206 10:45:05.425532  399286 command_runner.go:130] ! time="2025-12-06T10:45:05.425157991Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1206 10:45:05.425754  399286 command_runner.go:130] ! time="2025-12-06T10:45:05.425195308Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1206 10:45:05.425797  399286 command_runner.go:130] ! time="2025-12-06T10:45:05.42522017Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1206 10:45:05.425982  399286 command_runner.go:130] ! time="2025-12-06T10:45:05.425299687Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:45:05.426160  399286 command_runner.go:130] ! time="2025-12-06T10:45:05.42561672Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1206 10:45:05.442529  399286 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1206 10:45:05.470811  399286 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1206 10:45:05.470887  399286 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1206 10:45:05.470910  399286 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1206 10:45:05.470925  399286 command_runner.go:130] > #
	I1206 10:45:05.470961  399286 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1206 10:45:05.470990  399286 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1206 10:45:05.471012  399286 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1206 10:45:05.471037  399286 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1206 10:45:05.471066  399286 command_runner.go:130] > # reload'.
	I1206 10:45:05.471089  399286 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1206 10:45:05.471110  399286 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1206 10:45:05.471132  399286 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1206 10:45:05.471165  399286 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1206 10:45:05.471189  399286 command_runner.go:130] > [crio]
	I1206 10:45:05.471211  399286 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1206 10:45:05.471233  399286 command_runner.go:130] > # containers images, in this directory.
	I1206 10:45:05.471266  399286 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1206 10:45:05.471291  399286 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1206 10:45:05.471336  399286 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1206 10:45:05.471369  399286 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1206 10:45:05.471416  399286 command_runner.go:130] > # imagestore = ""
	I1206 10:45:05.471447  399286 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1206 10:45:05.471467  399286 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1206 10:45:05.471498  399286 command_runner.go:130] > # storage_driver = "overlay"
	I1206 10:45:05.471527  399286 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1206 10:45:05.471540  399286 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1206 10:45:05.471544  399286 command_runner.go:130] > # storage_option = [
	I1206 10:45:05.471548  399286 command_runner.go:130] > # ]
	I1206 10:45:05.471554  399286 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1206 10:45:05.471561  399286 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1206 10:45:05.471566  399286 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1206 10:45:05.471572  399286 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1206 10:45:05.471584  399286 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1206 10:45:05.471601  399286 command_runner.go:130] > # always happen on a node reboot
	I1206 10:45:05.471614  399286 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1206 10:45:05.471624  399286 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1206 10:45:05.471631  399286 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1206 10:45:05.471647  399286 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1206 10:45:05.471665  399286 command_runner.go:130] > # version_file_persist = ""
	I1206 10:45:05.471674  399286 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1206 10:45:05.471685  399286 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1206 10:45:05.471689  399286 command_runner.go:130] > # internal_wipe = true
	I1206 10:45:05.471701  399286 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1206 10:45:05.471736  399286 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1206 10:45:05.471747  399286 command_runner.go:130] > # internal_repair = true
	I1206 10:45:05.471753  399286 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1206 10:45:05.471760  399286 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1206 10:45:05.471768  399286 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1206 10:45:05.471774  399286 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1206 10:45:05.471790  399286 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1206 10:45:05.471793  399286 command_runner.go:130] > [crio.api]
	I1206 10:45:05.471799  399286 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1206 10:45:05.471810  399286 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1206 10:45:05.471817  399286 command_runner.go:130] > # IP address on which the stream server will listen.
	I1206 10:45:05.471822  399286 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1206 10:45:05.471829  399286 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1206 10:45:05.471837  399286 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1206 10:45:05.471841  399286 command_runner.go:130] > # stream_port = "0"
	I1206 10:45:05.471852  399286 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1206 10:45:05.471856  399286 command_runner.go:130] > # stream_enable_tls = false
	I1206 10:45:05.471867  399286 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1206 10:45:05.471871  399286 command_runner.go:130] > # stream_idle_timeout = ""
	I1206 10:45:05.471891  399286 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1206 10:45:05.471897  399286 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1206 10:45:05.471905  399286 command_runner.go:130] > # stream_tls_cert = ""
	I1206 10:45:05.471912  399286 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1206 10:45:05.471918  399286 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1206 10:45:05.471922  399286 command_runner.go:130] > # stream_tls_key = ""
	I1206 10:45:05.471928  399286 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1206 10:45:05.471937  399286 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1206 10:45:05.471942  399286 command_runner.go:130] > # automatically pick up the changes.
	I1206 10:45:05.471950  399286 command_runner.go:130] > # stream_tls_ca = ""
	I1206 10:45:05.471981  399286 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1206 10:45:05.471991  399286 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1206 10:45:05.471999  399286 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1206 10:45:05.472004  399286 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1206 10:45:05.472010  399286 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1206 10:45:05.472018  399286 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1206 10:45:05.472022  399286 command_runner.go:130] > [crio.runtime]
	I1206 10:45:05.472029  399286 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1206 10:45:05.472036  399286 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1206 10:45:05.472041  399286 command_runner.go:130] > # "nofile=1024:2048"
	I1206 10:45:05.472057  399286 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1206 10:45:05.472061  399286 command_runner.go:130] > # default_ulimits = [
	I1206 10:45:05.472064  399286 command_runner.go:130] > # ]
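Using the documented "<ulimit name>=<soft limit>:<hard limit>" format, a minimal sketch of a populated list looks like:

    [crio.runtime]
    default_ulimits = [
    	"nofile=1024:2048",   # soft limit 1024, hard limit 2048 open files
    ]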
	I1206 10:45:05.472070  399286 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1206 10:45:05.472077  399286 command_runner.go:130] > # no_pivot = false
	I1206 10:45:05.472083  399286 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1206 10:45:05.472090  399286 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1206 10:45:05.472095  399286 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1206 10:45:05.472103  399286 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1206 10:45:05.472108  399286 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1206 10:45:05.472117  399286 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1206 10:45:05.472123  399286 command_runner.go:130] > # conmon = ""
	I1206 10:45:05.472127  399286 command_runner.go:130] > # Cgroup setting for conmon
	I1206 10:45:05.472137  399286 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1206 10:45:05.472143  399286 command_runner.go:130] > conmon_cgroup = "pod"
	I1206 10:45:05.472152  399286 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1206 10:45:05.472157  399286 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1206 10:45:05.472164  399286 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1206 10:45:05.472168  399286 command_runner.go:130] > # conmon_env = [
	I1206 10:45:05.472173  399286 command_runner.go:130] > # ]
	I1206 10:45:05.472180  399286 command_runner.go:130] > # Additional environment variables to set for all the
	I1206 10:45:05.472188  399286 command_runner.go:130] > # containers. These are overridden if set in the
	I1206 10:45:05.472198  399286 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1206 10:45:05.472204  399286 command_runner.go:130] > # default_env = [
	I1206 10:45:05.472208  399286 command_runner.go:130] > # ]
	I1206 10:45:05.472213  399286 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1206 10:45:05.472223  399286 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1206 10:45:05.472229  399286 command_runner.go:130] > # selinux = false
	I1206 10:45:05.472236  399286 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1206 10:45:05.472246  399286 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1206 10:45:05.472252  399286 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:45:05.472255  399286 command_runner.go:130] > # seccomp_profile = ""
	I1206 10:45:05.472262  399286 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1206 10:45:05.472270  399286 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:45:05.472274  399286 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1206 10:45:05.472281  399286 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1206 10:45:05.472287  399286 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1206 10:45:05.472295  399286 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1206 10:45:05.472302  399286 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1206 10:45:05.472315  399286 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:45:05.472320  399286 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1206 10:45:05.472326  399286 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1206 10:45:05.472330  399286 command_runner.go:130] > # the cgroup blockio controller.
	I1206 10:45:05.472337  399286 command_runner.go:130] > # blockio_config_file = ""
	I1206 10:45:05.472345  399286 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1206 10:45:05.472353  399286 command_runner.go:130] > # blockio parameters.
	I1206 10:45:05.472357  399286 command_runner.go:130] > # blockio_reload = false
	I1206 10:45:05.472364  399286 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1206 10:45:05.472367  399286 command_runner.go:130] > # irqbalance daemon.
	I1206 10:45:05.472373  399286 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1206 10:45:05.472381  399286 command_runner.go:130] > # irqbalance_config_restore_file allows setting a CPU mask that CRI-O should
	I1206 10:45:05.472391  399286 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1206 10:45:05.472412  399286 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1206 10:45:05.472419  399286 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1206 10:45:05.472428  399286 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1206 10:45:05.472437  399286 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:45:05.472448  399286 command_runner.go:130] > # rdt_config_file = ""
	I1206 10:45:05.472455  399286 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1206 10:45:05.472459  399286 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1206 10:45:05.472465  399286 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1206 10:45:05.472472  399286 command_runner.go:130] > # separate_pull_cgroup = ""
	I1206 10:45:05.472479  399286 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1206 10:45:05.472486  399286 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1206 10:45:05.472498  399286 command_runner.go:130] > # will be added.
	I1206 10:45:05.472503  399286 command_runner.go:130] > # default_capabilities = [
	I1206 10:45:05.472506  399286 command_runner.go:130] > # 	"CHOWN",
	I1206 10:45:05.472510  399286 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1206 10:45:05.472520  399286 command_runner.go:130] > # 	"FSETID",
	I1206 10:45:05.472525  399286 command_runner.go:130] > # 	"FOWNER",
	I1206 10:45:05.472529  399286 command_runner.go:130] > # 	"SETGID",
	I1206 10:45:05.472539  399286 command_runner.go:130] > # 	"SETUID",
	I1206 10:45:05.472558  399286 command_runner.go:130] > # 	"SETPCAP",
	I1206 10:45:05.472573  399286 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1206 10:45:05.472576  399286 command_runner.go:130] > # 	"KILL",
	I1206 10:45:05.472579  399286 command_runner.go:130] > # ]
	I1206 10:45:05.472587  399286 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1206 10:45:05.472602  399286 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1206 10:45:05.472607  399286 command_runner.go:130] > # add_inheritable_capabilities = false
	I1206 10:45:05.472616  399286 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1206 10:45:05.472628  399286 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1206 10:45:05.472632  399286 command_runner.go:130] > default_sysctls = [
	I1206 10:45:05.472637  399286 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1206 10:45:05.472643  399286 command_runner.go:130] > ]
	I1206 10:45:05.472650  399286 command_runner.go:130] > # List of devices on the host that a
	I1206 10:45:05.472660  399286 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1206 10:45:05.472664  399286 command_runner.go:130] > # allowed_devices = [
	I1206 10:45:05.472670  399286 command_runner.go:130] > # 	"/dev/fuse",
	I1206 10:45:05.472674  399286 command_runner.go:130] > # 	"/dev/net/tun",
	I1206 10:45:05.472681  399286 command_runner.go:130] > # ]
	I1206 10:45:05.472689  399286 command_runner.go:130] > # List of additional devices, specified as
	I1206 10:45:05.472697  399286 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1206 10:45:05.472703  399286 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1206 10:45:05.472711  399286 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1206 10:45:05.472716  399286 command_runner.go:130] > # additional_devices = [
	I1206 10:45:05.472722  399286 command_runner.go:130] > # ]
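A minimal sketch combining both lists in the formats documented above; the entries are the documented examples plus a hypothetical disk mapping:

    [crio.runtime]
    allowed_devices = [
    	"/dev/fuse",   # usable via the io.kubernetes.cri-o.Devices annotation
    ]
    additional_devices = [
    	"/dev/sdc:/dev/xvdc:rwm",   # <device-on-host>:<device-on-container>:<permissions>
    ]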
	I1206 10:45:05.472730  399286 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1206 10:45:05.472737  399286 command_runner.go:130] > # cdi_spec_dirs = [
	I1206 10:45:05.472743  399286 command_runner.go:130] > # 	"/etc/cdi",
	I1206 10:45:05.472747  399286 command_runner.go:130] > # 	"/var/run/cdi",
	I1206 10:45:05.472750  399286 command_runner.go:130] > # ]
	I1206 10:45:05.472757  399286 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1206 10:45:05.472766  399286 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1206 10:45:05.472770  399286 command_runner.go:130] > # Defaults to false.
	I1206 10:45:05.472775  399286 command_runner.go:130] > # device_ownership_from_security_context = false
	I1206 10:45:05.472782  399286 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1206 10:45:05.472791  399286 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1206 10:45:05.472795  399286 command_runner.go:130] > # hooks_dir = [
	I1206 10:45:05.472800  399286 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1206 10:45:05.472806  399286 command_runner.go:130] > # ]
	I1206 10:45:05.472813  399286 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1206 10:45:05.472819  399286 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1206 10:45:05.472827  399286 command_runner.go:130] > # its default mounts from the following two files:
	I1206 10:45:05.472830  399286 command_runner.go:130] > #
	I1206 10:45:05.472836  399286 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1206 10:45:05.472845  399286 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1206 10:45:05.472852  399286 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1206 10:45:05.472858  399286 command_runner.go:130] > #
	I1206 10:45:05.472865  399286 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1206 10:45:05.472871  399286 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1206 10:45:05.472878  399286 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1206 10:45:05.472887  399286 command_runner.go:130] > #      only add mounts it finds in this file.
	I1206 10:45:05.472896  399286 command_runner.go:130] > #
	I1206 10:45:05.472902  399286 command_runner.go:130] > # default_mounts_file = ""
	I1206 10:45:05.472910  399286 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1206 10:45:05.472919  399286 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1206 10:45:05.472932  399286 command_runner.go:130] > # pids_limit = -1
	I1206 10:45:05.472938  399286 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1206 10:45:05.472947  399286 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1206 10:45:05.472961  399286 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1206 10:45:05.472979  399286 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1206 10:45:05.472983  399286 command_runner.go:130] > # log_size_max = -1
	I1206 10:45:05.472990  399286 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1206 10:45:05.472997  399286 command_runner.go:130] > # log_to_journald = false
	I1206 10:45:05.473006  399286 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1206 10:45:05.473011  399286 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1206 10:45:05.473016  399286 command_runner.go:130] > # Path to directory for container attach sockets.
	I1206 10:45:05.473024  399286 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1206 10:45:05.473032  399286 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1206 10:45:05.473036  399286 command_runner.go:130] > # bind_mount_prefix = ""
	I1206 10:45:05.473044  399286 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1206 10:45:05.473049  399286 command_runner.go:130] > # read_only = false
	I1206 10:45:05.473063  399286 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1206 10:45:05.473070  399286 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1206 10:45:05.473074  399286 command_runner.go:130] > # live configuration reload.
	I1206 10:45:05.473085  399286 command_runner.go:130] > # log_level = "info"
	I1206 10:45:05.473092  399286 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1206 10:45:05.473097  399286 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:45:05.473101  399286 command_runner.go:130] > # log_filter = ""
	I1206 10:45:05.473110  399286 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1206 10:45:05.473119  399286 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1206 10:45:05.473123  399286 command_runner.go:130] > # separated by comma.
	I1206 10:45:05.473132  399286 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1206 10:45:05.473138  399286 command_runner.go:130] > # uid_mappings = ""
	I1206 10:45:05.473145  399286 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1206 10:45:05.473155  399286 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1206 10:45:05.473162  399286 command_runner.go:130] > # separated by comma.
	I1206 10:45:05.473171  399286 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1206 10:45:05.473178  399286 command_runner.go:130] > # gid_mappings = ""
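Both mappings use the containerID:hostID:size form described above. A minimal sketch with hypothetical ranges (keeping in mind both options are marked deprecated in favor of KEP-127):

    [crio.runtime]
    uid_mappings = "0:100000:65536"   # container UID 0 maps to host UID 100000, for 65536 IDs
    gid_mappings = "0:100000:65536"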
	I1206 10:45:05.473185  399286 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1206 10:45:05.473197  399286 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1206 10:45:05.473206  399286 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1206 10:45:05.473217  399286 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1206 10:45:05.473223  399286 command_runner.go:130] > # minimum_mappable_uid = -1
	I1206 10:45:05.473230  399286 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1206 10:45:05.473238  399286 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1206 10:45:05.473249  399286 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1206 10:45:05.473260  399286 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1206 10:45:05.473264  399286 command_runner.go:130] > # minimum_mappable_gid = -1
	I1206 10:45:05.473270  399286 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1206 10:45:05.473282  399286 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1206 10:45:05.473287  399286 command_runner.go:130] > # value is 30s; lower values are not considered by CRI-O.
	I1206 10:45:05.473292  399286 command_runner.go:130] > # ctr_stop_timeout = 30
	I1206 10:45:05.473298  399286 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1206 10:45:05.473307  399286 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1206 10:45:05.473312  399286 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1206 10:45:05.473317  399286 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1206 10:45:05.473323  399286 command_runner.go:130] > # drop_infra_ctr = true
	I1206 10:45:05.473330  399286 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1206 10:45:05.473339  399286 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1206 10:45:05.473347  399286 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1206 10:45:05.473351  399286 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1206 10:45:05.473362  399286 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I1206 10:45:05.473373  399286 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1206 10:45:05.473378  399286 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1206 10:45:05.473383  399286 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1206 10:45:05.473389  399286 command_runner.go:130] > # shared_cpuset = ""
	I1206 10:45:05.473397  399286 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1206 10:45:05.473408  399286 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1206 10:45:05.473415  399286 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1206 10:45:05.473423  399286 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1206 10:45:05.473429  399286 command_runner.go:130] > # pinns_path = ""
	I1206 10:45:05.473435  399286 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1206 10:45:05.473442  399286 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1206 10:45:05.473446  399286 command_runner.go:130] > # enable_criu_support = true
	I1206 10:45:05.473458  399286 command_runner.go:130] > # Enable/disable the generation of the container and
	I1206 10:45:05.473465  399286 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1206 10:45:05.473469  399286 command_runner.go:130] > # enable_pod_events = false
	I1206 10:45:05.473476  399286 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1206 10:45:05.473483  399286 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1206 10:45:05.473487  399286 command_runner.go:130] > # default_runtime = "crun"
	I1206 10:45:05.473492  399286 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1206 10:45:05.473502  399286 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1206 10:45:05.473513  399286 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1206 10:45:05.473521  399286 command_runner.go:130] > # creation as a file is not desired either.
	I1206 10:45:05.473531  399286 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1206 10:45:05.473540  399286 command_runner.go:130] > # the hostname is being managed dynamically.
	I1206 10:45:05.473551  399286 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1206 10:45:05.473554  399286 command_runner.go:130] > # ]
	I1206 10:45:05.473561  399286 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1206 10:45:05.473567  399286 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1206 10:45:05.473576  399286 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1206 10:45:05.473582  399286 command_runner.go:130] > # Each entry in the table should follow the format:
	I1206 10:45:05.473596  399286 command_runner.go:130] > #
	I1206 10:45:05.473602  399286 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1206 10:45:05.473606  399286 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1206 10:45:05.473610  399286 command_runner.go:130] > # runtime_type = "oci"
	I1206 10:45:05.473616  399286 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1206 10:45:05.473623  399286 command_runner.go:130] > # inherit_default_runtime = false
	I1206 10:45:05.473628  399286 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1206 10:45:05.473632  399286 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1206 10:45:05.473646  399286 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1206 10:45:05.473650  399286 command_runner.go:130] > # monitor_env = []
	I1206 10:45:05.473654  399286 command_runner.go:130] > # privileged_without_host_devices = false
	I1206 10:45:05.473659  399286 command_runner.go:130] > # allowed_annotations = []
	I1206 10:45:05.473667  399286 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1206 10:45:05.473673  399286 command_runner.go:130] > # no_sync_log = false
	I1206 10:45:05.473677  399286 command_runner.go:130] > # default_annotations = {}
	I1206 10:45:05.473682  399286 command_runner.go:130] > # stream_websockets = false
	I1206 10:45:05.473689  399286 command_runner.go:130] > # seccomp_profile = ""
	I1206 10:45:05.473708  399286 command_runner.go:130] > # Where:
	I1206 10:45:05.473717  399286 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1206 10:45:05.473724  399286 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1206 10:45:05.473730  399286 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1206 10:45:05.473739  399286 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1206 10:45:05.473743  399286 command_runner.go:130] > #   in $PATH.
	I1206 10:45:05.473749  399286 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1206 10:45:05.473754  399286 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1206 10:45:05.473763  399286 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1206 10:45:05.473768  399286 command_runner.go:130] > #   state.
	I1206 10:45:05.473775  399286 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1206 10:45:05.473789  399286 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1206 10:45:05.473796  399286 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1206 10:45:05.473802  399286 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1206 10:45:05.473810  399286 command_runner.go:130] > #   the values from the default runtime on load time.
	I1206 10:45:05.473816  399286 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1206 10:45:05.473824  399286 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1206 10:45:05.473834  399286 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1206 10:45:05.473841  399286 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1206 10:45:05.473846  399286 command_runner.go:130] > #   The currently recognized values are:
	I1206 10:45:05.473852  399286 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1206 10:45:05.473862  399286 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1206 10:45:05.473868  399286 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1206 10:45:05.473876  399286 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1206 10:45:05.473890  399286 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1206 10:45:05.473900  399286 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1206 10:45:05.473907  399286 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1206 10:45:05.473914  399286 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1206 10:45:05.473924  399286 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1206 10:45:05.473930  399286 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1206 10:45:05.473938  399286 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1206 10:45:05.473946  399286 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1206 10:45:05.473955  399286 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1206 10:45:05.473961  399286 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1206 10:45:05.473970  399286 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1206 10:45:05.473978  399286 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1206 10:45:05.473988  399286 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1206 10:45:05.473992  399286 command_runner.go:130] > #   deprecated option "conmon".
	I1206 10:45:05.474000  399286 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1206 10:45:05.474008  399286 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1206 10:45:05.474015  399286 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1206 10:45:05.474020  399286 command_runner.go:130] > #   should be moved to the container's cgroup
	I1206 10:45:05.474027  399286 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1206 10:45:05.474034  399286 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1206 10:45:05.474042  399286 command_runner.go:130] > #   When using the pod runtime and conmon-rs, the monitor_env can be used to further configure
	I1206 10:45:05.474048  399286 command_runner.go:130] > #   conmon-rs by using:
	I1206 10:45:05.474057  399286 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1206 10:45:05.474070  399286 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1206 10:45:05.474077  399286 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1206 10:45:05.474091  399286 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1206 10:45:05.474096  399286 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1206 10:45:05.474106  399286 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1206 10:45:05.474114  399286 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1206 10:45:05.474122  399286 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1206 10:45:05.474130  399286 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1206 10:45:05.474143  399286 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1206 10:45:05.474148  399286 command_runner.go:130] > #   when a machine crash happens.
	I1206 10:45:05.474159  399286 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1206 10:45:05.474172  399286 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1206 10:45:05.474181  399286 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1206 10:45:05.474188  399286 command_runner.go:130] > #   seccomp profile for the runtime.
	I1206 10:45:05.474212  399286 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1206 10:45:05.474223  399286 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
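Putting those fields together, a handler entry for a hypothetical VM-based runtime could look like the sketch below; the handler name and paths are illustrative assumptions, not values from this configuration:

    [crio.runtime.runtimes.kata]
    runtime_path = "/usr/bin/kata-runtime"   # hypothetical binary path
    runtime_type = "vm"
    runtime_root = "/run/vc"                 # hypothetical state directory
    runtime_config_path = "/etc/kata-containers/configuration.toml"   # valid only for the vm type
    privileged_without_host_devices = true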
	I1206 10:45:05.474227  399286 command_runner.go:130] > #
	I1206 10:45:05.474233  399286 command_runner.go:130] > # Using the seccomp notifier feature:
	I1206 10:45:05.474236  399286 command_runner.go:130] > #
	I1206 10:45:05.474244  399286 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1206 10:45:05.474254  399286 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1206 10:45:05.474257  399286 command_runner.go:130] > #
	I1206 10:45:05.474264  399286 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1206 10:45:05.474273  399286 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1206 10:45:05.474276  399286 command_runner.go:130] > #
	I1206 10:45:05.474283  399286 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1206 10:45:05.474286  399286 command_runner.go:130] > # feature.
	I1206 10:45:05.474289  399286 command_runner.go:130] > #
	I1206 10:45:05.474299  399286 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1206 10:45:05.474307  399286 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1206 10:45:05.474314  399286 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1206 10:45:05.474322  399286 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1206 10:45:05.474329  399286 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1206 10:45:05.474336  399286 command_runner.go:130] > #
	I1206 10:45:05.474344  399286 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1206 10:45:05.474350  399286 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1206 10:45:05.474354  399286 command_runner.go:130] > #
	I1206 10:45:05.474361  399286 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1206 10:45:05.474371  399286 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1206 10:45:05.474374  399286 command_runner.go:130] > #
	I1206 10:45:05.474380  399286 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1206 10:45:05.474386  399286 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1206 10:45:05.474392  399286 command_runner.go:130] > # limitation.
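To use the notifier, the chosen handler has to allow the annotation, as in this minimal sketch (the remaining handler fields stay as configured below):

    [crio.runtime.runtimes.runc]
    allowed_annotations = [
    	"io.kubernetes.cri-o.seccompNotifierAction",
    ]

A pod would then carry the annotation io.kubernetes.cri-o.seccompNotifierAction=stop and set restartPolicy to Never, so CRI-O can terminate the workload after the 5-second timeout described above.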
	I1206 10:45:05.474401  399286 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1206 10:45:05.474409  399286 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1206 10:45:05.474413  399286 command_runner.go:130] > runtime_type = ""
	I1206 10:45:05.474417  399286 command_runner.go:130] > runtime_root = "/run/crun"
	I1206 10:45:05.474422  399286 command_runner.go:130] > inherit_default_runtime = false
	I1206 10:45:05.474426  399286 command_runner.go:130] > runtime_config_path = ""
	I1206 10:45:05.474432  399286 command_runner.go:130] > container_min_memory = ""
	I1206 10:45:05.474437  399286 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1206 10:45:05.474442  399286 command_runner.go:130] > monitor_cgroup = "pod"
	I1206 10:45:05.474448  399286 command_runner.go:130] > monitor_exec_cgroup = ""
	I1206 10:45:05.474453  399286 command_runner.go:130] > allowed_annotations = [
	I1206 10:45:05.474461  399286 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1206 10:45:05.474464  399286 command_runner.go:130] > ]
	I1206 10:45:05.474469  399286 command_runner.go:130] > privileged_without_host_devices = false
	I1206 10:45:05.474473  399286 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1206 10:45:05.474478  399286 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1206 10:45:05.474484  399286 command_runner.go:130] > runtime_type = ""
	I1206 10:45:05.474489  399286 command_runner.go:130] > runtime_root = "/run/runc"
	I1206 10:45:05.474496  399286 command_runner.go:130] > inherit_default_runtime = false
	I1206 10:45:05.474501  399286 command_runner.go:130] > runtime_config_path = ""
	I1206 10:45:05.474506  399286 command_runner.go:130] > container_min_memory = ""
	I1206 10:45:05.474513  399286 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1206 10:45:05.474518  399286 command_runner.go:130] > monitor_cgroup = "pod"
	I1206 10:45:05.474522  399286 command_runner.go:130] > monitor_exec_cgroup = ""
	I1206 10:45:05.474530  399286 command_runner.go:130] > privileged_without_host_devices = false
	I1206 10:45:05.474540  399286 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1206 10:45:05.474548  399286 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1206 10:45:05.474556  399286 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1206 10:45:05.474564  399286 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1206 10:45:05.474575  399286 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1206 10:45:05.474592  399286 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1206 10:45:05.474602  399286 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1206 10:45:05.474610  399286 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1206 10:45:05.474622  399286 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1206 10:45:05.474635  399286 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1206 10:45:05.474642  399286 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1206 10:45:05.474652  399286 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1206 10:45:05.474656  399286 command_runner.go:130] > # Example:
	I1206 10:45:05.474664  399286 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1206 10:45:05.474672  399286 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1206 10:45:05.474677  399286 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1206 10:45:05.474686  399286 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1206 10:45:05.474691  399286 command_runner.go:130] > # cpuset = "0-1"
	I1206 10:45:05.474703  399286 command_runner.go:130] > # cpushares = "5"
	I1206 10:45:05.474708  399286 command_runner.go:130] > # cpuquota = "1000"
	I1206 10:45:05.474712  399286 command_runner.go:130] > # cpuperiod = "100000"
	I1206 10:45:05.474716  399286 command_runner.go:130] > # cpulimit = "35"
	I1206 10:45:05.474720  399286 command_runner.go:130] > # Where:
	I1206 10:45:05.474724  399286 command_runner.go:130] > # The workload name is workload-type.
	I1206 10:45:05.474738  399286 command_runner.go:130] > # To select this workload, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1206 10:45:05.474744  399286 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1206 10:45:05.474749  399286 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1206 10:45:05.474761  399286 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1206 10:45:05.474777  399286 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1206 10:45:05.474783  399286 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1206 10:45:05.474790  399286 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1206 10:45:05.474797  399286 command_runner.go:130] > # Default value is set to true
	I1206 10:45:05.474803  399286 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1206 10:45:05.474809  399286 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1206 10:45:05.474821  399286 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1206 10:45:05.474826  399286 command_runner.go:130] > # Default value is set to 'false'
	I1206 10:45:05.474830  399286 command_runner.go:130] > # disable_hostport_mapping = false
	I1206 10:45:05.474836  399286 command_runner.go:130] > # timezone sets the timezone for a container in CRI-O.
	I1206 10:45:05.474847  399286 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1206 10:45:05.474853  399286 command_runner.go:130] > # timezone = ""
	I1206 10:45:05.474860  399286 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1206 10:45:05.474866  399286 command_runner.go:130] > #
	I1206 10:45:05.474874  399286 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1206 10:45:05.474883  399286 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1206 10:45:05.474889  399286 command_runner.go:130] > [crio.image]
	I1206 10:45:05.474895  399286 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1206 10:45:05.474899  399286 command_runner.go:130] > # default_transport = "docker://"
	I1206 10:45:05.474913  399286 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1206 10:45:05.474920  399286 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1206 10:45:05.474924  399286 command_runner.go:130] > # global_auth_file = ""
	I1206 10:45:05.474929  399286 command_runner.go:130] > # The image used to instantiate infra containers.
	I1206 10:45:05.474938  399286 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:45:05.474943  399286 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1206 10:45:05.474952  399286 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1206 10:45:05.474959  399286 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1206 10:45:05.474967  399286 command_runner.go:130] > # This option supports live configuration reload.
	I1206 10:45:05.474972  399286 command_runner.go:130] > # pause_image_auth_file = ""
	I1206 10:45:05.474977  399286 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1206 10:45:05.474984  399286 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1206 10:45:05.474994  399286 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1206 10:45:05.475000  399286 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1206 10:45:05.475009  399286 command_runner.go:130] > # pause_command = "/pause"
	I1206 10:45:05.475015  399286 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1206 10:45:05.475021  399286 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1206 10:45:05.475030  399286 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1206 10:45:05.475036  399286 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1206 10:45:05.475044  399286 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1206 10:45:05.475051  399286 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1206 10:45:05.475058  399286 command_runner.go:130] > # pinned_images = [
	I1206 10:45:05.475061  399286 command_runner.go:130] > # ]
	I1206 10:45:05.475067  399286 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1206 10:45:05.475074  399286 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1206 10:45:05.475083  399286 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1206 10:45:05.475090  399286 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1206 10:45:05.475098  399286 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1206 10:45:05.475104  399286 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1206 10:45:05.475110  399286 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1206 10:45:05.475120  399286 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1206 10:45:05.475129  399286 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1206 10:45:05.475138  399286 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I1206 10:45:05.475145  399286 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1206 10:45:05.475150  399286 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1206 10:45:05.475156  399286 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1206 10:45:05.475165  399286 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1206 10:45:05.475169  399286 command_runner.go:130] > # changing them here.
	I1206 10:45:05.475176  399286 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1206 10:45:05.475183  399286 command_runner.go:130] > # insecure_registries = [
	I1206 10:45:05.475186  399286 command_runner.go:130] > # ]
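Since insecure_registries is deprecated, the equivalent setting belongs in containers-registries.conf(5). A minimal sketch in its TOML v2 format, with a hypothetical registry host:

    # /etc/containers/registries.conf
    [[registry]]
    location = "registry.example.com:5000"   # hypothetical registry
    insecure = true                          # skip TLS verification for this registry only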
	I1206 10:45:05.475193  399286 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1206 10:45:05.475201  399286 command_runner.go:130] > # ignore; the last one ignores volumes entirely.
	I1206 10:45:05.475208  399286 command_runner.go:130] > # image_volumes = "mkdir"
	I1206 10:45:05.475214  399286 command_runner.go:130] > # Temporary directory to use for storing big files
	I1206 10:45:05.475220  399286 command_runner.go:130] > # big_files_temporary_dir = ""
	I1206 10:45:05.475226  399286 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1206 10:45:05.475236  399286 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1206 10:45:05.475241  399286 command_runner.go:130] > # auto_reload_registries = false
	I1206 10:45:05.475247  399286 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1206 10:45:05.475257  399286 command_runner.go:130] > # gets canceled. This value is also used to calculate the pull progress interval as pull_progress_timeout / 10.
	I1206 10:45:05.475267  399286 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1206 10:45:05.475271  399286 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1206 10:45:05.475277  399286 command_runner.go:130] > # The mode of short name resolution.
	I1206 10:45:05.475284  399286 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1206 10:45:05.475293  399286 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used and the results are ambiguous.
	I1206 10:45:05.475298  399286 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1206 10:45:05.475314  399286 command_runner.go:130] > # short_name_mode = "enforcing"
	I1206 10:45:05.475321  399286 command_runner.go:130] > # OCIArtifactMountSupport controls whether CRI-O should support OCI artifacts.
	I1206 10:45:05.475327  399286 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1206 10:45:05.475335  399286 command_runner.go:130] > # oci_artifact_mount_support = true
	I1206 10:45:05.475343  399286 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1206 10:45:05.475349  399286 command_runner.go:130] > # CNI plugins.
	I1206 10:45:05.475353  399286 command_runner.go:130] > [crio.network]
	I1206 10:45:05.475360  399286 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1206 10:45:05.475368  399286 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1206 10:45:05.475386  399286 command_runner.go:130] > # cni_default_network = ""
	I1206 10:45:05.475398  399286 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1206 10:45:05.475407  399286 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1206 10:45:05.475413  399286 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1206 10:45:05.475419  399286 command_runner.go:130] > # plugin_dirs = [
	I1206 10:45:05.475424  399286 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1206 10:45:05.475429  399286 command_runner.go:130] > # ]
	I1206 10:45:05.475434  399286 command_runner.go:130] > # List of included pod metrics.
	I1206 10:45:05.475441  399286 command_runner.go:130] > # included_pod_metrics = [
	I1206 10:45:05.475445  399286 command_runner.go:130] > # ]
	I1206 10:45:05.475451  399286 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1206 10:45:05.475457  399286 command_runner.go:130] > [crio.metrics]
	I1206 10:45:05.475463  399286 command_runner.go:130] > # Globally enable or disable metrics support.
	I1206 10:45:05.475467  399286 command_runner.go:130] > # enable_metrics = false
	I1206 10:45:05.475472  399286 command_runner.go:130] > # Specify enabled metrics collectors.
	I1206 10:45:05.475476  399286 command_runner.go:130] > # By default, all metrics are enabled.
	I1206 10:45:05.475483  399286 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1206 10:45:05.475490  399286 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1206 10:45:05.475497  399286 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1206 10:45:05.475501  399286 command_runner.go:130] > # metrics_collectors = [
	I1206 10:45:05.475505  399286 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1206 10:45:05.475510  399286 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1206 10:45:05.475518  399286 command_runner.go:130] > # 	"containers_oom_total",
	I1206 10:45:05.475522  399286 command_runner.go:130] > # 	"processes_defunct",
	I1206 10:45:05.475528  399286 command_runner.go:130] > # 	"operations_total",
	I1206 10:45:05.475533  399286 command_runner.go:130] > # 	"operations_latency_seconds",
	I1206 10:45:05.475540  399286 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1206 10:45:05.475547  399286 command_runner.go:130] > # 	"operations_errors_total",
	I1206 10:45:05.475554  399286 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1206 10:45:05.475559  399286 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1206 10:45:05.475564  399286 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1206 10:45:05.475571  399286 command_runner.go:130] > # 	"image_pulls_success_total",
	I1206 10:45:05.475576  399286 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1206 10:45:05.475583  399286 command_runner.go:130] > # 	"containers_oom_count_total",
	I1206 10:45:05.475590  399286 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1206 10:45:05.475602  399286 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1206 10:45:05.475607  399286 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1206 10:45:05.475610  399286 command_runner.go:130] > # ]
	I1206 10:45:05.475616  399286 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1206 10:45:05.475620  399286 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1206 10:45:05.475626  399286 command_runner.go:130] > # The port on which the metrics server will listen.
	I1206 10:45:05.475639  399286 command_runner.go:130] > # metrics_port = 9090
	I1206 10:45:05.475646  399286 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1206 10:45:05.475649  399286 command_runner.go:130] > # metrics_socket = ""
	I1206 10:45:05.475657  399286 command_runner.go:130] > # The certificate for the secure metrics server.
	I1206 10:45:05.475670  399286 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1206 10:45:05.475677  399286 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1206 10:45:05.475691  399286 command_runner.go:130] > # certificate on any modification event.
	I1206 10:45:05.475695  399286 command_runner.go:130] > # metrics_cert = ""
	I1206 10:45:05.475703  399286 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1206 10:45:05.475708  399286 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1206 10:45:05.475712  399286 command_runner.go:130] > # metrics_key = ""
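Taken together, a minimal sketch that enables the metrics endpoint and restricts collection to a subset of the collectors listed above:

    [crio.metrics]
    enable_metrics = true
    metrics_host = "127.0.0.1"
    metrics_port = 9090
    metrics_collectors = [
    	"operations_total",
    	"image_pulls_failure_total",
    ]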
	I1206 10:45:05.475720  399286 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1206 10:45:05.475727  399286 command_runner.go:130] > [crio.tracing]
	I1206 10:45:05.475732  399286 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1206 10:45:05.475737  399286 command_runner.go:130] > # enable_tracing = false
	I1206 10:45:05.475748  399286 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1206 10:45:05.475753  399286 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1206 10:45:05.475767  399286 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1206 10:45:05.475772  399286 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
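A minimal sketch for turning tracing on, assuming an OpenTelemetry collector is already listening on the default endpoint:

    [crio.tracing]
    enable_tracing = true
    tracing_endpoint = "127.0.0.1:4317"
    tracing_sampling_rate_per_million = 1000000   # always sample, per the comment above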
	I1206 10:45:05.475781  399286 command_runner.go:130] > # CRI-O NRI configuration.
	I1206 10:45:05.475784  399286 command_runner.go:130] > [crio.nri]
	I1206 10:45:05.475789  399286 command_runner.go:130] > # Globally enable or disable NRI.
	I1206 10:45:05.475792  399286 command_runner.go:130] > # enable_nri = true
	I1206 10:45:05.475799  399286 command_runner.go:130] > # NRI socket to listen on.
	I1206 10:45:05.475804  399286 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1206 10:45:05.475811  399286 command_runner.go:130] > # NRI plugin directory to use.
	I1206 10:45:05.475817  399286 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1206 10:45:05.475825  399286 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1206 10:45:05.475830  399286 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1206 10:45:05.475835  399286 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1206 10:45:05.475891  399286 command_runner.go:130] > # nri_disable_connections = false
	I1206 10:45:05.475901  399286 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1206 10:45:05.475906  399286 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1206 10:45:05.475911  399286 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1206 10:45:05.475918  399286 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1206 10:45:05.475923  399286 command_runner.go:130] > # NRI default validator configuration.
	I1206 10:45:05.475933  399286 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1206 10:45:05.475940  399286 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1206 10:45:05.475946  399286 command_runner.go:130] > # can be restricted/rejected:
	I1206 10:45:05.475950  399286 command_runner.go:130] > # - OCI hook injection
	I1206 10:45:05.475958  399286 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1206 10:45:05.475964  399286 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1206 10:45:05.475969  399286 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1206 10:45:05.475976  399286 command_runner.go:130] > # - adjustment of linux namespaces
	I1206 10:45:05.475983  399286 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1206 10:45:05.475990  399286 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1206 10:45:05.476000  399286 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1206 10:45:05.476005  399286 command_runner.go:130] > #
	I1206 10:45:05.476012  399286 command_runner.go:130] > # [crio.nri.default_validator]
	I1206 10:45:05.476020  399286 command_runner.go:130] > # nri_enable_default_validator = false
	I1206 10:45:05.476026  399286 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1206 10:45:05.476035  399286 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1206 10:45:05.476042  399286 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1206 10:45:05.476048  399286 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1206 10:45:05.476056  399286 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1206 10:45:05.476061  399286 command_runner.go:130] > # nri_validator_required_plugins = [
	I1206 10:45:05.476064  399286 command_runner.go:130] > # ]
	I1206 10:45:05.476070  399286 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1206 10:45:05.476079  399286 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1206 10:45:05.476083  399286 command_runner.go:130] > [crio.stats]
	I1206 10:45:05.476089  399286 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1206 10:45:05.476095  399286 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1206 10:45:05.476102  399286 command_runner.go:130] > # stats_collection_period = 0
	I1206 10:45:05.476109  399286 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1206 10:45:05.476119  399286 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1206 10:45:05.476124  399286 command_runner.go:130] > # collection_period = 0
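The whole [crio.metrics] block in the dump above is commented out, so metrics stay disabled in this run. As a hedged illustration only: if enable_metrics were turned on, CRI-O would serve Prometheus text on the metrics_host/metrics_port shown (127.0.0.1:9090 by default), and the counters listed earlier (image_pulls_success_total and friends) could be read with a plain HTTP GET. A minimal Go sketch, assuming those defaults:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Assumes CRI-O was started with enable_metrics = true and the default
	// metrics_host/metrics_port from the config dump above.
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://127.0.0.1:9090/metrics")
	if err != nil {
		fmt.Println("metrics endpoint unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Printf("%s", body) // Prometheus text format, e.g. image_pulls_success_total
}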
	I1206 10:45:05.476211  399286 cni.go:84] Creating CNI manager for ""
	I1206 10:45:05.476226  399286 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:45:05.476254  399286 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 10:45:05.476282  399286 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-196950 NodeName:functional-196950 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:45:05.476417  399286 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-196950"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
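The generated file above is a single stream of four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. A minimal Go sketch of how such a multi-document stream can be split and inspected, assuming gopkg.in/yaml.v3 is available; the field access is illustrative only, not minikube's actual parser:

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Hypothetical local copy of the generated config; in this run the real
	// file is written to /var/tmp/minikube/kubeadm.yaml.new on the node.
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // end of the multi-document stream
			}
			panic(err)
		}
		// Each document declares its own kind, e.g. InitConfiguration.
		fmt.Println("kind:", doc["kind"])
	}
}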
	
	I1206 10:45:05.476505  399286 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 10:45:05.483758  399286 command_runner.go:130] > kubeadm
	I1206 10:45:05.483779  399286 command_runner.go:130] > kubectl
	I1206 10:45:05.483784  399286 command_runner.go:130] > kubelet
	I1206 10:45:05.484784  399286 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:45:05.484852  399286 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:45:05.492924  399286 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1206 10:45:05.506239  399286 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 10:45:05.519506  399286 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1206 10:45:05.533524  399286 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 10:45:05.537326  399286 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1206 10:45:05.537418  399286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:45:05.647140  399286 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:45:05.721344  399286 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950 for IP: 192.168.49.2
	I1206 10:45:05.721367  399286 certs.go:195] generating shared ca certs ...
	I1206 10:45:05.721384  399286 certs.go:227] acquiring lock for ca certs: {Name:mke2ec61a37b6f3abbcbeb9abd23d6a19d011dd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:45:05.721593  399286 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.key
	I1206 10:45:05.721667  399286 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.key
	I1206 10:45:05.721683  399286 certs.go:257] generating profile certs ...
	I1206 10:45:05.721813  399286 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.key
	I1206 10:45:05.721910  399286 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.key.a77b39a6
	I1206 10:45:05.721994  399286 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.key
	I1206 10:45:05.722034  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1206 10:45:05.722057  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1206 10:45:05.722073  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1206 10:45:05.722118  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1206 10:45:05.722158  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1206 10:45:05.722199  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1206 10:45:05.722217  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1206 10:45:05.722228  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1206 10:45:05.722301  399286 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855.pem (1338 bytes)
	W1206 10:45:05.722365  399286 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855_empty.pem, impossibly tiny 0 bytes
	I1206 10:45:05.722388  399286 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 10:45:05.722448  399286 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem (1082 bytes)
	I1206 10:45:05.722502  399286 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:45:05.722537  399286 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem (1679 bytes)
	I1206 10:45:05.722611  399286 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem (1708 bytes)
	I1206 10:45:05.722670  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:45:05.722691  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855.pem -> /usr/share/ca-certificates/364855.pem
	I1206 10:45:05.722718  399286 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem -> /usr/share/ca-certificates/3648552.pem
	I1206 10:45:05.723349  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:45:05.743026  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 10:45:05.763126  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:45:05.783337  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 10:45:05.802756  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 10:45:05.821457  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 10:45:05.839993  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:45:05.858402  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 10:45:05.876528  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:45:05.894729  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855.pem --> /usr/share/ca-certificates/364855.pem (1338 bytes)
	I1206 10:45:05.912947  399286 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem --> /usr/share/ca-certificates/3648552.pem (1708 bytes)
	I1206 10:45:05.931356  399286 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 10:45:05.945284  399286 ssh_runner.go:195] Run: openssl version
	I1206 10:45:05.951573  399286 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1206 10:45:05.951648  399286 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:45:05.959293  399286 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:45:05.967114  399286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:45:05.970832  399286 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  6 10:26 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:45:05.971103  399286 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:26 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:45:05.971168  399286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:45:06.014236  399286 command_runner.go:130] > b5213941
	I1206 10:45:06.014768  399286 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 10:45:06.023097  399286 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364855.pem
	I1206 10:45:06.030984  399286 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364855.pem /etc/ssl/certs/364855.pem
	I1206 10:45:06.039316  399286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364855.pem
	I1206 10:45:06.043457  399286 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  6 10:36 /usr/share/ca-certificates/364855.pem
	I1206 10:45:06.043549  399286 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:36 /usr/share/ca-certificates/364855.pem
	I1206 10:45:06.043624  399286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364855.pem
	I1206 10:45:06.084760  399286 command_runner.go:130] > 51391683
	I1206 10:45:06.084914  399286 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 10:45:06.092772  399286 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648552.pem
	I1206 10:45:06.100248  399286 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648552.pem /etc/ssl/certs/3648552.pem
	I1206 10:45:06.107970  399286 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648552.pem
	I1206 10:45:06.112031  399286 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  6 10:36 /usr/share/ca-certificates/3648552.pem
	I1206 10:45:06.112134  399286 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:36 /usr/share/ca-certificates/3648552.pem
	I1206 10:45:06.112229  399286 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648552.pem
	I1206 10:45:06.152822  399286 command_runner.go:130] > 3ec20f2e
	I1206 10:45:06.153315  399286 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
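Each CA certificate above is installed by computing its OpenSSL subject hash and symlinking <hash>.0 to it under /etc/ssl/certs, which is how OpenSSL's default verifier locates trust anchors (hence the b5213941, 51391683 and 3ec20f2e values in the log). A rough Go equivalent of those shell steps, shelling out to openssl the same way the log does; paths are the ones from this run, and the snippet is only a sketch:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCA(pem string) error {
	// `openssl x509 -hash -noout -in <pem>` prints the subject hash,
	// e.g. "b5213941", matching the log output above.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // mirror `ln -fs`: replace any stale link
	return os.Symlink(pem, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("install failed:", err)
	}
}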
	I1206 10:45:06.161105  399286 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:45:06.165043  399286 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:45:06.165068  399286 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1206 10:45:06.165075  399286 command_runner.go:130] > Device: 259,1	Inode: 1826360     Links: 1
	I1206 10:45:06.165081  399286 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1206 10:45:06.165087  399286 command_runner.go:130] > Access: 2025-12-06 10:40:58.003190996 +0000
	I1206 10:45:06.165092  399286 command_runner.go:130] > Modify: 2025-12-06 10:36:53.916464205 +0000
	I1206 10:45:06.165098  399286 command_runner.go:130] > Change: 2025-12-06 10:36:53.916464205 +0000
	I1206 10:45:06.165103  399286 command_runner.go:130] >  Birth: 2025-12-06 10:36:53.916464205 +0000
	I1206 10:45:06.165195  399286 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 10:45:06.207365  399286 command_runner.go:130] > Certificate will not expire
	I1206 10:45:06.207850  399286 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 10:45:06.248448  399286 command_runner.go:130] > Certificate will not expire
	I1206 10:45:06.248932  399286 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 10:45:06.289656  399286 command_runner.go:130] > Certificate will not expire
	I1206 10:45:06.290116  399286 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 10:45:06.330828  399286 command_runner.go:130] > Certificate will not expire
	I1206 10:45:06.331412  399286 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 10:45:06.372096  399286 command_runner.go:130] > Certificate will not expire
	I1206 10:45:06.372595  399286 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1206 10:45:06.413596  399286 command_runner.go:130] > Certificate will not expire
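Each control-plane certificate is then checked with `openssl x509 -checkend 86400`, i.e. "will this certificate still be valid 24 hours from now?". The same test expressed in pure Go with crypto/x509, as a sketch (file path taken from this run, error handling trimmed):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent of `openssl x509 -checkend 86400`: report failure if the
	// certificate expires within the next 24 hours.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}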
	I1206 10:45:06.414056  399286 kubeadm.go:401] StartCluster: {Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:45:06.414151  399286 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:45:06.414217  399286 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:45:06.442677  399286 cri.go:89] found id: ""
	I1206 10:45:06.442751  399286 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:45:06.449938  399286 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1206 10:45:06.449962  399286 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1206 10:45:06.449969  399286 command_runner.go:130] > /var/lib/minikube/etcd:
	I1206 10:45:06.450931  399286 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 10:45:06.450952  399286 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 10:45:06.451032  399286 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 10:45:06.459080  399286 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:45:06.459618  399286 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-196950" does not appear in /home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 10:45:06.459742  399286 kubeconfig.go:62] /home/jenkins/minikube-integration/22047-362985/kubeconfig needs updating (will repair): [kubeconfig missing "functional-196950" cluster setting kubeconfig missing "functional-196950" context setting]
	I1206 10:45:06.460016  399286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/kubeconfig: {Name:mk779651834cfbdc6f0b5e8f5a9abc0f05106181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:45:06.460484  399286 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 10:45:06.460638  399286 kapi.go:59] client config for functional-196950: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.crt", KeyFile:"/home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.key", CAFile:"/home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 10:45:06.461238  399286 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1206 10:45:06.461268  399286 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1206 10:45:06.461280  399286 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1206 10:45:06.461291  399286 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1206 10:45:06.461295  399286 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1206 10:45:06.461337  399286 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1206 10:45:06.461637  399286 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 10:45:06.473548  399286 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1206 10:45:06.473584  399286 kubeadm.go:602] duration metric: took 22.626231ms to restartPrimaryControlPlane
	I1206 10:45:06.473594  399286 kubeadm.go:403] duration metric: took 59.544914ms to StartCluster
	I1206 10:45:06.473609  399286 settings.go:142] acquiring lock: {Name:mk789e01bfd4ab9fa1e2a8415fa99b570b26926a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:45:06.473671  399286 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 10:45:06.474312  399286 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/kubeconfig: {Name:mk779651834cfbdc6f0b5e8f5a9abc0f05106181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:45:06.474518  399286 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 10:45:06.474963  399286 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 10:45:06.475042  399286 addons.go:70] Setting storage-provisioner=true in profile "functional-196950"
	I1206 10:45:06.475066  399286 addons.go:239] Setting addon storage-provisioner=true in "functional-196950"
	I1206 10:45:06.475092  399286 host.go:66] Checking if "functional-196950" exists ...
	I1206 10:45:06.475912  399286 cli_runner.go:164] Run: docker container inspect functional-196950 --format={{.State.Status}}
	I1206 10:45:06.476264  399286 config.go:182] Loaded profile config "functional-196950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:45:06.476354  399286 addons.go:70] Setting default-storageclass=true in profile "functional-196950"
	I1206 10:45:06.476394  399286 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-196950"
	I1206 10:45:06.476791  399286 cli_runner.go:164] Run: docker container inspect functional-196950 --format={{.State.Status}}
	I1206 10:45:06.481213  399286 out.go:179] * Verifying Kubernetes components...
	I1206 10:45:06.484465  399286 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:45:06.517764  399286 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 10:45:06.517930  399286 kapi.go:59] client config for functional-196950: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.crt", KeyFile:"/home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.key", CAFile:"/home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 10:45:06.518202  399286 addons.go:239] Setting addon default-storageclass=true in "functional-196950"
	I1206 10:45:06.518232  399286 host.go:66] Checking if "functional-196950" exists ...
	I1206 10:45:06.518684  399286 cli_runner.go:164] Run: docker container inspect functional-196950 --format={{.State.Status}}
	I1206 10:45:06.522254  399286 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1206 10:45:06.525206  399286 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:06.525232  399286 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1206 10:45:06.525299  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:06.551517  399286 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:06.551540  399286 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1206 10:45:06.551605  399286 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:45:06.570954  399286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:45:06.593327  399286 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:45:06.685314  399286 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:45:06.722168  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:06.737572  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:07.472063  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:07.472098  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:07.472124  399286 retry.go:31] will retry after 153.213078ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:07.472168  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:07.472179  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:07.472186  399286 retry.go:31] will retry after 247.840204ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:07.472279  399286 node_ready.go:35] waiting up to 6m0s for node "functional-196950" to be "Ready" ...
	I1206 10:45:07.472418  399286 type.go:168] "Request Body" body=""
	I1206 10:45:07.472509  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:07.472828  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:07.626184  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:07.684274  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:07.688010  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:07.688045  399286 retry.go:31] will retry after 503.005947ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:07.720209  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:07.781565  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:07.785057  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:07.785089  399286 retry.go:31] will retry after 443.254463ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:07.973439  399286 type.go:168] "Request Body" body=""
	I1206 10:45:07.973529  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:07.974023  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:08.191658  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:08.229200  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:08.274450  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:08.282645  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:08.282730  399286 retry.go:31] will retry after 342.048952ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:08.327096  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:08.327147  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:08.327166  399286 retry.go:31] will retry after 504.811759ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:08.473470  399286 type.go:168] "Request Body" body=""
	I1206 10:45:08.473573  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:08.473913  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:08.625427  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:08.684176  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:08.687968  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:08.688010  399286 retry.go:31] will retry after 1.261411242s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:08.832256  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:08.891180  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:08.894801  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:08.894836  399286 retry.go:31] will retry after 546.340513ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:08.973077  399286 type.go:168] "Request Body" body=""
	I1206 10:45:08.973155  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:08.973522  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:09.442273  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:09.472729  399286 type.go:168] "Request Body" body=""
	I1206 10:45:09.472803  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:09.473092  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:09.473139  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:09.506571  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:09.510870  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:09.510955  399286 retry.go:31] will retry after 985.837399ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:09.950606  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:09.973212  399286 type.go:168] "Request Body" body=""
	I1206 10:45:09.973298  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:09.973577  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:10.030286  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:10.030402  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:10.030452  399286 retry.go:31] will retry after 829.97822ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:10.472519  399286 type.go:168] "Request Body" body=""
	I1206 10:45:10.472588  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:10.472971  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:10.497156  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:10.582698  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:10.582757  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:10.582779  399286 retry.go:31] will retry after 2.303396874s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:10.861265  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:10.923027  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:10.923124  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:10.923150  399286 retry.go:31] will retry after 2.722563752s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
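
The cycle above repeats throughout this log: run `kubectl apply`, capture the non-zero exit, then schedule another attempt after a growing, jittered delay (the "will retry after ..." lines from retry.go:31). A minimal sketch of that retry-with-backoff idea in Go, using only the standard library; the function name, the doubling factor, and the jitter scheme here are illustrative assumptions, not minikube's exact implementation:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryBackoff retries fn with exponentially growing, jittered delays,
// mirroring the "will retry after ..." lines in the log above.
// Multiplier and cap are illustrative, not minikube's exact values.
func retryBackoff(fn func() error, initial, max time.Duration, attempts int) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Jitter the sleep so concurrent retry loops don't synchronize.
		sleep := delay/2 + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		if delay *= 2; delay > max {
			delay = max
		}
	}
	return err // last error after exhausting all attempts
}

func main() {
	calls := 0
	err := retryBackoff(func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("connect: connection refused")
		}
		return nil
	}, 100*time.Millisecond, 2*time.Second, 6)
	fmt.Println("result:", err)
}

Note that the delays logged here (829ms, 2.3s, 2.7s, ...) grow but are not strictly doubling, consistent with jitter being applied on top of the base backoff.
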
	I1206 10:45:10.973315  399286 type.go:168] "Request Body" body=""
	I1206 10:45:10.973396  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:10.973700  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:11.473530  399286 type.go:168] "Request Body" body=""
	I1206 10:45:11.473608  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:11.474011  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:11.474073  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
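
In parallel with the apply retries, the log shows a second loop polling GET /api/v1/nodes/functional-196950 roughly every 500ms and logging a node_ready.go warning whenever the apiserver refuses the connection. A minimal sketch of that readiness poll using client-go; this is an assumption-level reconstruction (minikube's actual node_ready.go differs in detail), and the timeout value is made up for illustration:

package main

import (
	"context"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the apiserver every 500ms (the cadence visible in the
// log timestamps) until the node reports condition Ready=True or the timeout
// expires. Connection-refused errors are logged and retried, as above.
func waitNodeReady(client kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				// Swallow the error: the apiserver may still be coming up.
				log.Printf("error getting node %q condition \"Ready\" status (will retry): %v", name, err)
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	if err := waitNodeReady(client, "functional-196950", 4*time.Minute); err != nil {
		log.Fatal(err)
	}
}

Returning (false, nil) from the condition keeps the poll alive through transient errors, which is exactly the behavior the repeated "(will retry)" warnings record.
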
	I1206 10:45:11.972906  399286 type.go:168] "Request Body" body=""
	I1206 10:45:11.972979  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:11.973246  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:12.472617  399286 type.go:168] "Request Body" body=""
	I1206 10:45:12.472696  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:12.473071  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:12.886451  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:12.946418  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:12.951114  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:12.951151  399286 retry.go:31] will retry after 2.435253477s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:12.973196  399286 type.go:168] "Request Body" body=""
	I1206 10:45:12.973267  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:12.973628  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:13.473384  399286 type.go:168] "Request Body" body=""
	I1206 10:45:13.473455  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:13.473719  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:13.646250  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:13.707346  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:13.707418  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:13.707442  399286 retry.go:31] will retry after 2.81497333s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:13.972564  399286 type.go:168] "Request Body" body=""
	I1206 10:45:13.972648  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:13.972980  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:13.973040  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:14.472608  399286 type.go:168] "Request Body" body=""
	I1206 10:45:14.472684  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:14.473066  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:14.972534  399286 type.go:168] "Request Body" body=""
	I1206 10:45:14.972625  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:14.972955  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:15.386668  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:15.447515  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:15.447555  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:15.447573  399286 retry.go:31] will retry after 2.327509257s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:15.472847  399286 type.go:168] "Request Body" body=""
	I1206 10:45:15.472922  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:15.473272  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:15.973226  399286 type.go:168] "Request Body" body=""
	I1206 10:45:15.973305  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:15.973654  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:15.973708  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:16.473465  399286 type.go:168] "Request Body" body=""
	I1206 10:45:16.473539  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:16.473810  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:16.523188  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:16.580568  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:16.584128  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:16.584161  399286 retry.go:31] will retry after 3.565207529s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:16.972816  399286 type.go:168] "Request Body" body=""
	I1206 10:45:16.972893  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:16.973236  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:17.472948  399286 type.go:168] "Request Body" body=""
	I1206 10:45:17.473028  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:17.473355  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:17.775942  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:17.833742  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:17.838032  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:17.838073  399286 retry.go:31] will retry after 9.046125485s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:17.973259  399286 type.go:168] "Request Body" body=""
	I1206 10:45:17.973333  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:17.973605  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:18.473464  399286 type.go:168] "Request Body" body=""
	I1206 10:45:18.473544  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:18.473887  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:18.473936  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:18.972571  399286 type.go:168] "Request Body" body=""
	I1206 10:45:18.972668  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:18.973005  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:19.472497  399286 type.go:168] "Request Body" body=""
	I1206 10:45:19.472590  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:19.472870  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:19.972598  399286 type.go:168] "Request Body" body=""
	I1206 10:45:19.972674  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:19.972970  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:20.150467  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:20.215833  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:20.215885  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:20.215905  399286 retry.go:31] will retry after 9.222024728s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:20.473247  399286 type.go:168] "Request Body" body=""
	I1206 10:45:20.473322  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:20.473670  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:20.973445  399286 type.go:168] "Request Body" body=""
	I1206 10:45:20.973528  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:20.973801  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:20.973861  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:21.472555  399286 type.go:168] "Request Body" body=""
	I1206 10:45:21.472664  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:21.473020  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:21.972799  399286 type.go:168] "Request Body" body=""
	I1206 10:45:21.972877  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:21.973219  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:22.472497  399286 type.go:168] "Request Body" body=""
	I1206 10:45:22.472576  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:22.472904  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:22.972589  399286 type.go:168] "Request Body" body=""
	I1206 10:45:22.972674  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:22.973015  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:23.472753  399286 type.go:168] "Request Body" body=""
	I1206 10:45:23.472835  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:23.473181  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:23.473243  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:23.972733  399286 type.go:168] "Request Body" body=""
	I1206 10:45:23.972804  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:23.973079  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:24.472742  399286 type.go:168] "Request Body" body=""
	I1206 10:45:24.472825  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:24.473193  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:24.972805  399286 type.go:168] "Request Body" body=""
	I1206 10:45:24.972890  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:24.973299  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:25.473054  399286 type.go:168] "Request Body" body=""
	I1206 10:45:25.473127  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:25.473403  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:25.473453  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:25.972761  399286 type.go:168] "Request Body" body=""
	I1206 10:45:25.972834  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:25.973177  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:26.473056  399286 type.go:168] "Request Body" body=""
	I1206 10:45:26.473132  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:26.473476  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:26.884353  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:26.943184  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:26.947029  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:26.947062  399286 retry.go:31] will retry after 13.756266916s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:26.973239  399286 type.go:168] "Request Body" body=""
	I1206 10:45:26.973309  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:26.973589  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:27.473507  399286 type.go:168] "Request Body" body=""
	I1206 10:45:27.473585  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:27.473949  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:27.474006  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:27.972689  399286 type.go:168] "Request Body" body=""
	I1206 10:45:27.972763  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:27.973145  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:28.472835  399286 type.go:168] "Request Body" body=""
	I1206 10:45:28.472909  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:28.473194  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:28.972604  399286 type.go:168] "Request Body" body=""
	I1206 10:45:28.972682  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:28.972972  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:29.438741  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:29.473252  399286 type.go:168] "Request Body" body=""
	I1206 10:45:29.473342  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:29.473619  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:29.500011  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:29.500052  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:29.500073  399286 retry.go:31] will retry after 11.458105653s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:29.972514  399286 type.go:168] "Request Body" body=""
	I1206 10:45:29.972601  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:29.972925  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:29.972975  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:30.472573  399286 type.go:168] "Request Body" body=""
	I1206 10:45:30.472647  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:30.472967  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:30.972595  399286 type.go:168] "Request Body" body=""
	I1206 10:45:30.972703  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:30.973084  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:31.472784  399286 type.go:168] "Request Body" body=""
	I1206 10:45:31.472855  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:31.473199  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:31.972958  399286 type.go:168] "Request Body" body=""
	I1206 10:45:31.973040  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:31.973376  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:31.973432  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:32.473378  399286 type.go:168] "Request Body" body=""
	I1206 10:45:32.473454  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:32.473784  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:32.972445  399286 type.go:168] "Request Body" body=""
	I1206 10:45:32.972534  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:32.972822  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:33.472492  399286 type.go:168] "Request Body" body=""
	I1206 10:45:33.472570  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:33.472871  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:33.972494  399286 type.go:168] "Request Body" body=""
	I1206 10:45:33.972591  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:33.972945  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:34.472578  399286 type.go:168] "Request Body" body=""
	I1206 10:45:34.472650  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:34.473009  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:34.473064  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:34.972732  399286 type.go:168] "Request Body" body=""
	I1206 10:45:34.972808  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:34.973199  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:35.472761  399286 type.go:168] "Request Body" body=""
	I1206 10:45:35.472857  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:35.473192  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:35.972534  399286 type.go:168] "Request Body" body=""
	I1206 10:45:35.972619  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:35.972903  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:36.472813  399286 type.go:168] "Request Body" body=""
	I1206 10:45:36.472898  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:36.473245  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:36.473300  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:36.972930  399286 type.go:168] "Request Body" body=""
	I1206 10:45:36.973016  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:36.973389  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:37.473181  399286 type.go:168] "Request Body" body=""
	I1206 10:45:37.473253  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:37.473531  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:37.973319  399286 type.go:168] "Request Body" body=""
	I1206 10:45:37.973403  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:37.973730  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:38.472498  399286 type.go:168] "Request Body" body=""
	I1206 10:45:38.472583  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:38.472928  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:38.972624  399286 type.go:168] "Request Body" body=""
	I1206 10:45:38.972703  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:38.973126  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:38.973176  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:39.472568  399286 type.go:168] "Request Body" body=""
	I1206 10:45:39.472665  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:39.472987  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:39.972728  399286 type.go:168] "Request Body" body=""
	I1206 10:45:39.972805  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:39.973175  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:40.473384  399286 type.go:168] "Request Body" body=""
	I1206 10:45:40.473456  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:40.473714  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:40.704276  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:40.766032  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:40.766082  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:40.766102  399286 retry.go:31] will retry after 12.834175432s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:40.958402  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:40.972905  399286 type.go:168] "Request Body" body=""
	I1206 10:45:40.972992  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:40.973301  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:40.973353  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:41.030830  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:41.030878  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:41.030900  399286 retry.go:31] will retry after 14.333484689s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:41.472501  399286 type.go:168] "Request Body" body=""
	I1206 10:45:41.472600  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:41.472944  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:41.972853  399286 type.go:168] "Request Body" body=""
	I1206 10:45:41.972920  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:41.973187  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:42.472557  399286 type.go:168] "Request Body" body=""
	I1206 10:45:42.472636  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:42.472968  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:42.972558  399286 type.go:168] "Request Body" body=""
	I1206 10:45:42.972635  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:42.972937  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:45:43.472504  399286 type.go:168] "Request Body" body=""
	I1206 10:45:43.472579  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:43.472849  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:45:43.472893  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:45:43.972555  399286 type.go:168] "Request Body" body=""
	I1206 10:45:43.972629  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:45:43.972940  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... GET https://192.168.49.2:8441/api/v1/nodes/functional-196950 repeats on the same ~500ms cadence with identical empty responses; node_ready.go:55 logs "connection refused" (will retry) at 10:45:45, 10:45:47, 10:45:49 and 10:45:51 ...]
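	(Editor's note: the "Request"/"Response" lines themselves are emitted by a logging wrapper around the HTTP transport, client-go's round_trippers.go. A minimal sketch of the same technique follows, an http.RoundTripper that logs verb, URL and latency around each call; the logger and field names here are illustrative assumptions, not client-go's exact output format.)

	package main

	import (
		"log"
		"net/http"
		"time"
	)

	// loggingTransport wraps another RoundTripper and logs each request
	// and response, mirroring the round_trippers.go lines in this log.
	type loggingTransport struct {
		next http.RoundTripper
	}

	func (t *loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
		log.Printf("Request verb=%s url=%s", req.Method, req.URL)
		start := time.Now()
		resp, err := t.next.RoundTrip(req)
		ms := time.Since(start).Milliseconds()
		if err != nil {
			// e.g. "connect: connection refused" while the apiserver is down.
			log.Printf("Response error=%v milliseconds=%d", err, ms)
			return nil, err
		}
		log.Printf("Response status=%q milliseconds=%d", resp.Status, ms)
		return resp, nil
	}

	func main() {
		client := &http.Client{Transport: &loggingTransport{next: http.DefaultTransport}}
		_, _ = client.Get("https://192.168.49.2:8441/api/v1/nodes/functional-196950")
	}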
	I1206 10:45:53.600459  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:45:53.661736  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:53.665292  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:53.665323  399286 retry.go:31] will retry after 22.486760262s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
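	(Editor's note: the retry.go "will retry after 22.486760262s" line, and the varying delays later in this log (~12s, ~25s, ~17s, ~30s), show the addon apply being retried with a jittered delay rather than a fixed interval. Below is a minimal sketch of that retry shape around an exec'd kubectl apply; the attempt count, delay bounds, and jitter are assumptions for illustration, not minikube's actual retry.go parameters.)

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// applyWithRetry re-runs `kubectl apply` until it succeeds or attempts
	// are exhausted, sleeping a jittered delay between tries, similar in
	// shape to the retry.go lines in this log.
	func applyWithRetry(manifest string, attempts int) error {
		var err error
		for i := 0; i < attempts; i++ {
			cmd := exec.Command("kubectl", "apply", "--force", "-f", manifest)
			if out, e := cmd.CombinedOutput(); e == nil {
				return nil
			} else {
				err = fmt.Errorf("%v: %s", e, out)
			}
			// Jittered delay (10-30s), echoing the varying "will retry after" values.
			d := 10*time.Second + time.Duration(rand.Int63n(int64(20*time.Second)))
			fmt.Printf("apply failed, will retry after %s: %v\n", d, err)
			time.Sleep(d)
		}
		return err
	}

	func main() {
		if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
			fmt.Println("giving up:", err)
		}
	}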
	[... node polls continue every ~500ms; "connection refused" warning at 10:45:54 ...]
	I1206 10:45:55.364722  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:45:55.425632  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:45:55.425678  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:45:55.425713  399286 retry.go:31] will retry after 12.507538253s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... node polls continue every ~500ms; "connection refused" warnings at 10:45:56 and 10:45:58 ...]
	[... node polls continue every ~500ms through 10:46:07; "connection refused" warnings at 10:46:01, 10:46:03 and 10:46:05 ...]
	I1206 10:46:07.933511  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:46:07.973023  399286 type.go:168] "Request Body" body=""
	I1206 10:46:07.973096  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:07.973373  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:07.994514  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:46:07.994569  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:46:07.994603  399286 retry.go:31] will retry after 24.706041433s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... node polls continue every ~500ms; "connection refused" warnings at 10:46:08 and 10:46:10 ...]
	[... node polls continue every ~500ms; "connection refused" warnings at 10:46:12 and 10:46:15 ...]
	I1206 10:46:16.153289  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:46:16.211194  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:46:16.214959  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:46:16.214991  399286 retry.go:31] will retry after 16.737835039s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	[... node polls continue every ~500ms; "connection refused" warnings at 10:46:17 and 10:46:19 ...]
	[... node polls continue every ~500ms through 10:46:32; "connection refused" warnings at 10:46:21, 10:46:24, 10:46:26, 10:46:28 and 10:46:31 ...]
	I1206 10:46:32.701368  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1206 10:46:32.764470  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:46:32.764520  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:46:32.764620  399286 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
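	(Editor's note: each apply fails at client-side validation because kubectl cannot download the OpenAPI schema from the apiserver, which is refusing connections on port 8441, so the addon callbacks cannot succeed until the apiserver comes back. A quick way to confirm that root cause is to probe the apiserver's standard readiness endpoint directly; a minimal sketch follows. /readyz is a standard kube-apiserver endpoint; the skip-verify TLS config is an assumption suitable only for a local debugging probe.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// Probe the apiserver's readiness endpoint. While the log above was
	// being produced, this would fail with "connect: connection refused".
	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Local debugging probe only; do not skip verification in production.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.49.2:8441/readyz")
		if err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver /readyz:", resp.Status)
	}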
	I1206 10:46:32.953898  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:46:32.973480  399286 type.go:168] "Request Body" body=""
	I1206 10:46:32.973551  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:32.973819  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:46:33.013712  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:46:33.017430  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1206 10:46:33.017463  399286 retry.go:31] will retry after 30.205234164s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
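The `retry.go:31] will retry after 30.205234164s` line above shows a randomized, non-round delay rather than a fixed interval. A minimal sketch of how such a jittered exponential backoff can be computed (the exact formula here is an assumption for illustration, not the one in minikube's retry.go):

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// jitteredDelay returns base*2^attempt scaled by a random factor in
// [1.0, 1.5). Illustrative formula only; it just shows how a non-round
// delay like 30.205234164s can arise from jitter.
func jitteredDelay(base time.Duration, attempt int) time.Duration {
	d := base << attempt          // exponential growth
	f := 1.0 + rand.Float64()*0.5 // random jitter factor
	return time.Duration(float64(d) * f)
}

func main() {
	for attempt := 0; attempt < 4; attempt++ {
		fmt.Println(jitteredDelay(10*time.Second, attempt))
	}
}
```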
	I1206 10:46:33.472638  399286 type.go:168] "Request Body" body=""
	I1206 10:46:33.472723  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:46:33.473069  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:46:33.473124  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
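The repeated `GET .../api/v1/nodes/functional-196950` requests that dominate the rest of this log are a node-readiness poll: fetch the node, check its Ready condition, sleep, repeat. A self-contained client-go sketch of the same idea (the kubeconfig path is illustrative; this is not minikube's node_ready.go):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig path; any admin kubeconfig works here.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-196950", metav1.GetOptions{})
		if err != nil {
			// Matches the log: connection refused while the apiserver is down.
			fmt.Println("will retry:", err)
			time.Sleep(500 * time.Millisecond)
			continue
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				fmt.Println("node is Ready")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}
```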
	[... GET https://192.168.49.2:8441/api/v1/nodes/functional-196950 polled every ~500ms from 10:46:33.97 through 10:47:02.97; every request returned status="" in 0ms, and node_ready.go:55 warned "dial tcp 192.168.49.2:8441: connect: connection refused" (will retry) roughly every 2s ...]
	I1206 10:47:03.223490  399286 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1206 10:47:03.285940  399286 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:47:03.285995  399286 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1206 10:47:03.286078  399286 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1206 10:47:03.289299  399286 out.go:179] * Enabled addons: 
	I1206 10:47:03.293166  399286 addons.go:530] duration metric: took 1m56.818196786s for enable addons: enabled=[]
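Every failure in this log has the same root cause: nothing is listening on apiserver port 8441, so both kubectl (via localhost:8441) and the readiness poll (via 192.168.49.2:8441) get `connection refused`; the `--validate=false` suggested in the error text would only skip schema validation, not fix the dial. A hedged sketch for probing the port directly before bothering to retry (addresses taken from the log):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// apiserverUp does a plain TCP dial to the apiserver address seen in the
// log. A refused dial here is exactly the "connect: connection refused"
// that every kubectl apply and node poll above reports.
func apiserverUp(addr string) bool {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	for _, addr := range []string{"127.0.0.1:8441", "192.168.49.2:8441"} {
		fmt.Println(addr, "reachable:", apiserverUp(addr))
	}
}
```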
	I1206 10:47:03.473269  399286 type.go:168] "Request Body" body=""
	I1206 10:47:03.473338  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:03.473598  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	[... identical ~500ms polling of GET https://192.168.49.2:8441/api/v1/nodes/functional-196950 continued from 10:47:03.97 through 10:47:29.47, with the same empty status="" responses and periodic node_ready.go:55 "connection refused" (will retry) warnings ...]
	I1206 10:47:29.972580  399286 type.go:168] "Request Body" body=""
	I1206 10:47:29.972672  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:29.973033  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:29.973089  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:30.472734  399286 type.go:168] "Request Body" body=""
	I1206 10:47:30.472805  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:30.473130  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:30.972629  399286 type.go:168] "Request Body" body=""
	I1206 10:47:30.972708  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:30.973010  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:31.472578  399286 type.go:168] "Request Body" body=""
	I1206 10:47:31.472654  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:31.472987  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:31.972822  399286 type.go:168] "Request Body" body=""
	I1206 10:47:31.972897  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:31.973160  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:31.973200  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:32.472914  399286 type.go:168] "Request Body" body=""
	I1206 10:47:32.473004  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:32.473347  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:32.973159  399286 type.go:168] "Request Body" body=""
	I1206 10:47:32.973233  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:32.973581  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:33.473360  399286 type.go:168] "Request Body" body=""
	I1206 10:47:33.473433  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:33.473718  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:33.973498  399286 type.go:168] "Request Body" body=""
	I1206 10:47:33.973577  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:33.973949  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:33.974029  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:34.472526  399286 type.go:168] "Request Body" body=""
	I1206 10:47:34.472608  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:34.472947  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:34.972533  399286 type.go:168] "Request Body" body=""
	I1206 10:47:34.972628  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:34.972989  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:35.472565  399286 type.go:168] "Request Body" body=""
	I1206 10:47:35.472646  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:35.473023  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:35.972740  399286 type.go:168] "Request Body" body=""
	I1206 10:47:35.972818  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:35.973158  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:36.473170  399286 type.go:168] "Request Body" body=""
	I1206 10:47:36.473254  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:36.473528  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:36.473569  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:36.972525  399286 type.go:168] "Request Body" body=""
	I1206 10:47:36.972602  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:36.972938  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:37.472576  399286 type.go:168] "Request Body" body=""
	I1206 10:47:37.472656  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:37.473000  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:37.972555  399286 type.go:168] "Request Body" body=""
	I1206 10:47:37.972627  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:37.972895  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:38.472579  399286 type.go:168] "Request Body" body=""
	I1206 10:47:38.472663  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:38.473008  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:38.972569  399286 type.go:168] "Request Body" body=""
	I1206 10:47:38.972649  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:38.973012  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:38.973070  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:39.472526  399286 type.go:168] "Request Body" body=""
	I1206 10:47:39.472602  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:39.472864  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:39.972558  399286 type.go:168] "Request Body" body=""
	I1206 10:47:39.972636  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:39.972965  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:40.472567  399286 type.go:168] "Request Body" body=""
	I1206 10:47:40.472639  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:40.472972  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:40.972512  399286 type.go:168] "Request Body" body=""
	I1206 10:47:40.972588  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:40.972883  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:41.472541  399286 type.go:168] "Request Body" body=""
	I1206 10:47:41.472626  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:41.472980  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:41.473038  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:41.972863  399286 type.go:168] "Request Body" body=""
	I1206 10:47:41.972939  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:41.973307  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:42.472600  399286 type.go:168] "Request Body" body=""
	I1206 10:47:42.472680  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:42.472974  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:42.972681  399286 type.go:168] "Request Body" body=""
	I1206 10:47:42.972759  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:42.973100  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:43.472601  399286 type.go:168] "Request Body" body=""
	I1206 10:47:43.472694  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:43.473056  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:43.473116  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:43.972500  399286 type.go:168] "Request Body" body=""
	I1206 10:47:43.972579  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:43.972899  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:44.472587  399286 type.go:168] "Request Body" body=""
	I1206 10:47:44.472675  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:44.473014  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:44.972574  399286 type.go:168] "Request Body" body=""
	I1206 10:47:44.972651  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:44.973031  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:45.472655  399286 type.go:168] "Request Body" body=""
	I1206 10:47:45.472726  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:45.473026  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:45.972718  399286 type.go:168] "Request Body" body=""
	I1206 10:47:45.972800  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:45.973152  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:45.973210  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:46.472509  399286 type.go:168] "Request Body" body=""
	I1206 10:47:46.472600  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:46.472959  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:46.972791  399286 type.go:168] "Request Body" body=""
	I1206 10:47:46.972860  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:46.973128  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:47.472798  399286 type.go:168] "Request Body" body=""
	I1206 10:47:47.472874  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:47.473208  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:47.972568  399286 type.go:168] "Request Body" body=""
	I1206 10:47:47.972648  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:47.972980  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:48.473410  399286 type.go:168] "Request Body" body=""
	I1206 10:47:48.473482  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:48.473747  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:48.473789  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:48.973486  399286 type.go:168] "Request Body" body=""
	I1206 10:47:48.973564  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:48.973890  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:49.472605  399286 type.go:168] "Request Body" body=""
	I1206 10:47:49.472724  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:49.473137  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:49.972518  399286 type.go:168] "Request Body" body=""
	I1206 10:47:49.972592  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:49.972867  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:50.472548  399286 type.go:168] "Request Body" body=""
	I1206 10:47:50.472628  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:50.472960  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:50.972581  399286 type.go:168] "Request Body" body=""
	I1206 10:47:50.972656  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:50.972999  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:50.973058  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:51.472529  399286 type.go:168] "Request Body" body=""
	I1206 10:47:51.472601  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:51.472873  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:51.972855  399286 type.go:168] "Request Body" body=""
	I1206 10:47:51.972934  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:51.973251  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:52.472527  399286 type.go:168] "Request Body" body=""
	I1206 10:47:52.472603  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:52.472925  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:52.972632  399286 type.go:168] "Request Body" body=""
	I1206 10:47:52.972710  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:52.973009  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:53.472551  399286 type.go:168] "Request Body" body=""
	I1206 10:47:53.472635  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:53.473004  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:53.473083  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:53.972778  399286 type.go:168] "Request Body" body=""
	I1206 10:47:53.972868  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:53.973278  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:54.472595  399286 type.go:168] "Request Body" body=""
	I1206 10:47:54.472680  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:54.473008  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:54.972544  399286 type.go:168] "Request Body" body=""
	I1206 10:47:54.972624  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:54.972997  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:55.472544  399286 type.go:168] "Request Body" body=""
	I1206 10:47:55.472633  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:55.472967  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:55.972686  399286 type.go:168] "Request Body" body=""
	I1206 10:47:55.972759  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:55.973084  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:55.973129  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:56.472527  399286 type.go:168] "Request Body" body=""
	I1206 10:47:56.472600  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:56.472935  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:56.972607  399286 type.go:168] "Request Body" body=""
	I1206 10:47:56.972688  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:56.973052  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:57.472495  399286 type.go:168] "Request Body" body=""
	I1206 10:47:57.472571  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:57.472885  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:57.972579  399286 type.go:168] "Request Body" body=""
	I1206 10:47:57.972653  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:57.972989  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:58.472576  399286 type.go:168] "Request Body" body=""
	I1206 10:47:58.472654  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:58.472981  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:47:58.473038  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:47:58.972524  399286 type.go:168] "Request Body" body=""
	I1206 10:47:58.972595  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:58.972920  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:59.472623  399286 type.go:168] "Request Body" body=""
	I1206 10:47:59.472702  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:59.473058  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:47:59.972774  399286 type.go:168] "Request Body" body=""
	I1206 10:47:59.972856  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:47:59.973198  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:00.472879  399286 type.go:168] "Request Body" body=""
	I1206 10:48:00.472963  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:00.473302  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:00.473350  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:00.973100  399286 type.go:168] "Request Body" body=""
	I1206 10:48:00.973182  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:00.973500  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:01.473348  399286 type.go:168] "Request Body" body=""
	I1206 10:48:01.473426  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:01.473749  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:01.972487  399286 type.go:168] "Request Body" body=""
	I1206 10:48:01.972565  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:01.972839  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:02.472524  399286 type.go:168] "Request Body" body=""
	I1206 10:48:02.472604  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:02.472916  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:02.972566  399286 type.go:168] "Request Body" body=""
	I1206 10:48:02.972640  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:02.972945  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:02.972990  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:03.472551  399286 type.go:168] "Request Body" body=""
	I1206 10:48:03.472641  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:03.472970  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:03.972530  399286 type.go:168] "Request Body" body=""
	I1206 10:48:03.972607  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:03.972945  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:04.472651  399286 type.go:168] "Request Body" body=""
	I1206 10:48:04.472730  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:04.473079  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:04.972502  399286 type.go:168] "Request Body" body=""
	I1206 10:48:04.972576  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:04.972860  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:05.472568  399286 type.go:168] "Request Body" body=""
	I1206 10:48:05.472646  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:05.473022  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:05.473077  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:05.972744  399286 type.go:168] "Request Body" body=""
	I1206 10:48:05.972834  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:05.973199  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:06.473241  399286 type.go:168] "Request Body" body=""
	I1206 10:48:06.473315  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:06.473604  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:06.972611  399286 type.go:168] "Request Body" body=""
	I1206 10:48:06.972691  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:06.972992  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:07.472569  399286 type.go:168] "Request Body" body=""
	I1206 10:48:07.472658  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:07.473030  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:07.972580  399286 type.go:168] "Request Body" body=""
	I1206 10:48:07.972659  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:07.972925  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:07.972966  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:08.472573  399286 type.go:168] "Request Body" body=""
	I1206 10:48:08.472665  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:08.472999  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:08.972712  399286 type.go:168] "Request Body" body=""
	I1206 10:48:08.972805  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:08.973106  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:09.472510  399286 type.go:168] "Request Body" body=""
	I1206 10:48:09.472584  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:09.472910  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:09.972623  399286 type.go:168] "Request Body" body=""
	I1206 10:48:09.972697  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:09.973067  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:09.973119  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:10.472815  399286 type.go:168] "Request Body" body=""
	I1206 10:48:10.472886  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:10.473224  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:10.972551  399286 type.go:168] "Request Body" body=""
	I1206 10:48:10.972627  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:10.972947  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:11.472597  399286 type.go:168] "Request Body" body=""
	I1206 10:48:11.472675  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:11.473029  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:11.972904  399286 type.go:168] "Request Body" body=""
	I1206 10:48:11.972981  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:11.973328  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:11.973383  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:12.472840  399286 type.go:168] "Request Body" body=""
	I1206 10:48:12.472917  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:12.473212  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:12.972545  399286 type.go:168] "Request Body" body=""
	I1206 10:48:12.972620  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:12.972959  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:13.472663  399286 type.go:168] "Request Body" body=""
	I1206 10:48:13.472740  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:13.473115  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:13.972809  399286 type.go:168] "Request Body" body=""
	I1206 10:48:13.972882  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:13.973148  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:14.472560  399286 type.go:168] "Request Body" body=""
	I1206 10:48:14.472633  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:14.472974  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:14.473026  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:14.972547  399286 type.go:168] "Request Body" body=""
	I1206 10:48:14.972633  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:14.972981  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:15.472494  399286 type.go:168] "Request Body" body=""
	I1206 10:48:15.472572  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:15.472888  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:15.972557  399286 type.go:168] "Request Body" body=""
	I1206 10:48:15.972632  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:15.973009  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:16.472796  399286 type.go:168] "Request Body" body=""
	I1206 10:48:16.472875  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:16.473235  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:16.473293  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:16.972964  399286 type.go:168] "Request Body" body=""
	I1206 10:48:16.973036  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:16.973307  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:17.473074  399286 type.go:168] "Request Body" body=""
	I1206 10:48:17.473147  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:17.473485  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:17.973295  399286 type.go:168] "Request Body" body=""
	I1206 10:48:17.973378  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:17.973725  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:18.473438  399286 type.go:168] "Request Body" body=""
	I1206 10:48:18.473505  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:18.473841  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:18.473920  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:48:18.972621  399286 type.go:168] "Request Body" body=""
	I1206 10:48:18.972695  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:18.973065  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:19.472598  399286 type.go:168] "Request Body" body=""
	I1206 10:48:19.472705  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:19.473114  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:19.972515  399286 type.go:168] "Request Body" body=""
	I1206 10:48:19.972585  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:19.972856  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:20.472548  399286 type.go:168] "Request Body" body=""
	I1206 10:48:20.472625  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:20.472958  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:48:20.972576  399286 type.go:168] "Request Body" body=""
	I1206 10:48:20.972660  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:48:20.973023  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:48:20.973083  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	[... the poll/refusal cycle above repeats unchanged every ~500ms from 10:48:21 through 10:49:19: each GET https://192.168.49.2:8441/api/v1/nodes/functional-196950 is refused with "dial tcp 192.168.49.2:8441: connect: connection refused", and node_ready.go re-logs the retry warning after every fourth or fifth failed attempt ...]
	I1206 10:49:20.472552  399286 type.go:168] "Request Body" body=""
	I1206 10:49:20.472627  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:20.472962  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:20.972548  399286 type.go:168] "Request Body" body=""
	I1206 10:49:20.972633  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:20.972967  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:21.472516  399286 type.go:168] "Request Body" body=""
	I1206 10:49:21.472590  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:21.472909  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:21.972841  399286 type.go:168] "Request Body" body=""
	I1206 10:49:21.972916  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:21.973264  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:21.973322  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:22.473122  399286 type.go:168] "Request Body" body=""
	I1206 10:49:22.473197  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:22.473559  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:22.973364  399286 type.go:168] "Request Body" body=""
	I1206 10:49:22.973440  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:22.973787  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:23.472556  399286 type.go:168] "Request Body" body=""
	I1206 10:49:23.472643  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:23.472989  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:23.972688  399286 type.go:168] "Request Body" body=""
	I1206 10:49:23.972770  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:23.973107  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:24.472802  399286 type.go:168] "Request Body" body=""
	I1206 10:49:24.472877  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:24.473193  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:24.473243  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:24.972584  399286 type.go:168] "Request Body" body=""
	I1206 10:49:24.972665  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:24.973023  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:25.472744  399286 type.go:168] "Request Body" body=""
	I1206 10:49:25.472827  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:25.473187  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:25.972875  399286 type.go:168] "Request Body" body=""
	I1206 10:49:25.972942  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:25.973212  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:26.473315  399286 type.go:168] "Request Body" body=""
	I1206 10:49:26.473401  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:26.473746  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:26.473798  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:26.972480  399286 type.go:168] "Request Body" body=""
	I1206 10:49:26.972564  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:26.972910  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:27.472459  399286 type.go:168] "Request Body" body=""
	I1206 10:49:27.472532  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:27.472791  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:27.972488  399286 type.go:168] "Request Body" body=""
	I1206 10:49:27.972566  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:27.972886  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:28.472548  399286 type.go:168] "Request Body" body=""
	I1206 10:49:28.472623  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:28.472959  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:28.972639  399286 type.go:168] "Request Body" body=""
	I1206 10:49:28.972711  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:28.973000  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:28.973048  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:29.472558  399286 type.go:168] "Request Body" body=""
	I1206 10:49:29.472637  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:29.472984  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:29.972552  399286 type.go:168] "Request Body" body=""
	I1206 10:49:29.972639  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:29.972980  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:30.472653  399286 type.go:168] "Request Body" body=""
	I1206 10:49:30.472729  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:30.473004  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:30.972581  399286 type.go:168] "Request Body" body=""
	I1206 10:49:30.972663  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:30.972997  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:31.472551  399286 type.go:168] "Request Body" body=""
	I1206 10:49:31.472633  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:31.472995  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:31.473053  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:31.972765  399286 type.go:168] "Request Body" body=""
	I1206 10:49:31.972832  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:31.973098  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:32.472547  399286 type.go:168] "Request Body" body=""
	I1206 10:49:32.472631  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:32.473016  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:32.972568  399286 type.go:168] "Request Body" body=""
	I1206 10:49:32.972645  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:32.972982  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:33.472517  399286 type.go:168] "Request Body" body=""
	I1206 10:49:33.472591  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:33.472911  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:33.972503  399286 type.go:168] "Request Body" body=""
	I1206 10:49:33.972576  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:33.972901  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:33.972964  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:34.472657  399286 type.go:168] "Request Body" body=""
	I1206 10:49:34.472734  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:34.473129  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:34.972818  399286 type.go:168] "Request Body" body=""
	I1206 10:49:34.972889  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:34.973175  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:35.472878  399286 type.go:168] "Request Body" body=""
	I1206 10:49:35.472955  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:35.473329  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:35.973094  399286 type.go:168] "Request Body" body=""
	I1206 10:49:35.973174  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:35.973494  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:35.973549  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:36.472432  399286 type.go:168] "Request Body" body=""
	I1206 10:49:36.472505  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:36.472781  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:36.972828  399286 type.go:168] "Request Body" body=""
	I1206 10:49:36.972905  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:36.973252  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:37.472563  399286 type.go:168] "Request Body" body=""
	I1206 10:49:37.472637  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:37.472994  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:37.972683  399286 type.go:168] "Request Body" body=""
	I1206 10:49:37.972763  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:37.973077  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:38.472554  399286 type.go:168] "Request Body" body=""
	I1206 10:49:38.472633  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:38.472969  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:38.473033  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:38.972729  399286 type.go:168] "Request Body" body=""
	I1206 10:49:38.972808  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:38.973142  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:39.472497  399286 type.go:168] "Request Body" body=""
	I1206 10:49:39.472571  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:39.472854  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:39.972565  399286 type.go:168] "Request Body" body=""
	I1206 10:49:39.972639  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:39.972983  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:40.472566  399286 type.go:168] "Request Body" body=""
	I1206 10:49:40.472647  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:40.472966  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:40.972679  399286 type.go:168] "Request Body" body=""
	I1206 10:49:40.972760  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:40.973065  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:40.973121  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:41.472568  399286 type.go:168] "Request Body" body=""
	I1206 10:49:41.472657  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:41.473025  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:41.972922  399286 type.go:168] "Request Body" body=""
	I1206 10:49:41.972998  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:41.973339  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:42.473053  399286 type.go:168] "Request Body" body=""
	I1206 10:49:42.473124  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:42.473408  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:42.973276  399286 type.go:168] "Request Body" body=""
	I1206 10:49:42.973355  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:42.973694  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:42.973752  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:43.473503  399286 type.go:168] "Request Body" body=""
	I1206 10:49:43.473574  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:43.473918  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:43.972599  399286 type.go:168] "Request Body" body=""
	I1206 10:49:43.972670  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:43.973000  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:44.472572  399286 type.go:168] "Request Body" body=""
	I1206 10:49:44.472649  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:44.472982  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:44.972582  399286 type.go:168] "Request Body" body=""
	I1206 10:49:44.972670  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:44.973019  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:45.472513  399286 type.go:168] "Request Body" body=""
	I1206 10:49:45.472583  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:45.472857  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:45.472901  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:45.972615  399286 type.go:168] "Request Body" body=""
	I1206 10:49:45.972695  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:45.973015  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:46.472495  399286 type.go:168] "Request Body" body=""
	I1206 10:49:46.472575  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:46.472931  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:46.972626  399286 type.go:168] "Request Body" body=""
	I1206 10:49:46.974621  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:46.974930  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:47.472626  399286 type.go:168] "Request Body" body=""
	I1206 10:49:47.472726  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:47.473070  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:47.473127  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:47.972571  399286 type.go:168] "Request Body" body=""
	I1206 10:49:47.972667  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:47.972989  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:48.472520  399286 type.go:168] "Request Body" body=""
	I1206 10:49:48.472595  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:48.472920  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:48.972585  399286 type.go:168] "Request Body" body=""
	I1206 10:49:48.972669  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:48.973030  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:49.472592  399286 type.go:168] "Request Body" body=""
	I1206 10:49:49.472671  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:49.473006  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:49.972683  399286 type.go:168] "Request Body" body=""
	I1206 10:49:49.972754  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:49.973062  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:49.973108  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:50.472567  399286 type.go:168] "Request Body" body=""
	I1206 10:49:50.472644  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:50.472979  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:50.972718  399286 type.go:168] "Request Body" body=""
	I1206 10:49:50.972795  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:50.973180  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:51.472470  399286 type.go:168] "Request Body" body=""
	I1206 10:49:51.472554  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:51.472819  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:51.972847  399286 type.go:168] "Request Body" body=""
	I1206 10:49:51.972931  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:51.973379  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:51.973433  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:52.473217  399286 type.go:168] "Request Body" body=""
	I1206 10:49:52.473304  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:52.473657  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:52.973426  399286 type.go:168] "Request Body" body=""
	I1206 10:49:52.973497  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:52.973879  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:53.472575  399286 type.go:168] "Request Body" body=""
	I1206 10:49:53.472654  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:53.473006  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:53.972732  399286 type.go:168] "Request Body" body=""
	I1206 10:49:53.972810  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:53.973150  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:54.472486  399286 type.go:168] "Request Body" body=""
	I1206 10:49:54.472557  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:54.472823  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:54.472866  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:54.972537  399286 type.go:168] "Request Body" body=""
	I1206 10:49:54.972613  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:54.972990  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:55.472700  399286 type.go:168] "Request Body" body=""
	I1206 10:49:55.472773  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:55.473120  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:55.972590  399286 type.go:168] "Request Body" body=""
	I1206 10:49:55.972662  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:55.972932  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:56.472845  399286 type.go:168] "Request Body" body=""
	I1206 10:49:56.472928  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:56.473307  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:56.473367  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:56.973002  399286 type.go:168] "Request Body" body=""
	I1206 10:49:56.973079  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:56.973419  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:57.473158  399286 type.go:168] "Request Body" body=""
	I1206 10:49:57.473232  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:57.473497  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:57.973292  399286 type.go:168] "Request Body" body=""
	I1206 10:49:57.973367  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:57.973704  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:58.473494  399286 type.go:168] "Request Body" body=""
	I1206 10:49:58.473568  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:58.473902  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:49:58.473963  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:49:58.972557  399286 type.go:168] "Request Body" body=""
	I1206 10:49:58.972629  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:58.972908  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:59.472542  399286 type.go:168] "Request Body" body=""
	I1206 10:49:59.472622  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:59.472961  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:49:59.972698  399286 type.go:168] "Request Body" body=""
	I1206 10:49:59.972792  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:49:59.973143  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:00.475191  399286 type.go:168] "Request Body" body=""
	I1206 10:50:00.475305  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:00.475740  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:00.475791  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:00.972513  399286 type.go:168] "Request Body" body=""
	I1206 10:50:00.972593  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:00.972946  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:01.472607  399286 type.go:168] "Request Body" body=""
	I1206 10:50:01.472698  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:01.473000  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:01.972891  399286 type.go:168] "Request Body" body=""
	I1206 10:50:01.972964  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:01.973246  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:02.473133  399286 type.go:168] "Request Body" body=""
	I1206 10:50:02.473209  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:02.473541  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:02.973359  399286 type.go:168] "Request Body" body=""
	I1206 10:50:02.973436  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:02.973744  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:02.973794  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:03.472437  399286 type.go:168] "Request Body" body=""
	I1206 10:50:03.472515  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:03.472786  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:03.972517  399286 type.go:168] "Request Body" body=""
	I1206 10:50:03.972617  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:03.972974  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:04.472690  399286 type.go:168] "Request Body" body=""
	I1206 10:50:04.472770  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:04.473092  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:04.972613  399286 type.go:168] "Request Body" body=""
	I1206 10:50:04.972688  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:04.973025  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:05.472589  399286 type.go:168] "Request Body" body=""
	I1206 10:50:05.472662  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:05.472985  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:05.473041  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:05.972699  399286 type.go:168] "Request Body" body=""
	I1206 10:50:05.972774  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:05.973134  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:06.472872  399286 type.go:168] "Request Body" body=""
	I1206 10:50:06.472950  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:06.473229  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:06.973317  399286 type.go:168] "Request Body" body=""
	I1206 10:50:06.973399  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:06.973730  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:07.472451  399286 type.go:168] "Request Body" body=""
	I1206 10:50:07.472529  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:07.472886  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:07.972570  399286 type.go:168] "Request Body" body=""
	I1206 10:50:07.972651  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:07.972971  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:07.973027  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:08.472555  399286 type.go:168] "Request Body" body=""
	I1206 10:50:08.472629  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:08.472968  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:08.972685  399286 type.go:168] "Request Body" body=""
	I1206 10:50:08.972768  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:08.973126  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:09.472810  399286 type.go:168] "Request Body" body=""
	I1206 10:50:09.472880  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:09.473152  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:09.972581  399286 type.go:168] "Request Body" body=""
	I1206 10:50:09.972655  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:09.973005  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:09.973065  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:10.472745  399286 type.go:168] "Request Body" body=""
	I1206 10:50:10.472826  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:10.473165  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:10.972520  399286 type.go:168] "Request Body" body=""
	I1206 10:50:10.972626  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:10.972896  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:11.472575  399286 type.go:168] "Request Body" body=""
	I1206 10:50:11.472653  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:11.472988  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:11.972973  399286 type.go:168] "Request Body" body=""
	I1206 10:50:11.973057  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:11.973408  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:11.973457  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:12.473192  399286 type.go:168] "Request Body" body=""
	I1206 10:50:12.473263  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:12.473529  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:12.973277  399286 type.go:168] "Request Body" body=""
	I1206 10:50:12.973358  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:12.973714  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:13.473443  399286 type.go:168] "Request Body" body=""
	I1206 10:50:13.473531  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:13.473882  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:13.972569  399286 type.go:168] "Request Body" body=""
	I1206 10:50:13.972666  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:13.972977  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:14.472568  399286 type.go:168] "Request Body" body=""
	I1206 10:50:14.472647  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:14.472975  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:14.473038  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:14.972576  399286 type.go:168] "Request Body" body=""
	I1206 10:50:14.972652  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:14.972994  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:15.472548  399286 type.go:168] "Request Body" body=""
	I1206 10:50:15.472616  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:15.472889  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:15.972575  399286 type.go:168] "Request Body" body=""
	I1206 10:50:15.972652  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:15.972980  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:16.472885  399286 type.go:168] "Request Body" body=""
	I1206 10:50:16.472971  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:16.473326  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:16.473380  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:16.973027  399286 type.go:168] "Request Body" body=""
	I1206 10:50:16.973096  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:16.973374  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:17.473136  399286 type.go:168] "Request Body" body=""
	I1206 10:50:17.473212  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:17.473562  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:17.973242  399286 type.go:168] "Request Body" body=""
	I1206 10:50:17.973317  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:17.973682  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:18.473432  399286 type.go:168] "Request Body" body=""
	I1206 10:50:18.473500  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:18.473770  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:18.473813  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:18.972497  399286 type.go:168] "Request Body" body=""
	I1206 10:50:18.972578  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:18.972916  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:19.472629  399286 type.go:168] "Request Body" body=""
	I1206 10:50:19.472708  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:19.473031  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:19.972542  399286 type.go:168] "Request Body" body=""
	I1206 10:50:19.972615  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:19.972928  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:20.472572  399286 type.go:168] "Request Body" body=""
	I1206 10:50:20.472675  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:20.473027  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:20.972608  399286 type.go:168] "Request Body" body=""
	I1206 10:50:20.972691  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:20.973039  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:20.973093  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:21.472736  399286 type.go:168] "Request Body" body=""
	I1206 10:50:21.472814  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:21.473165  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:21.972862  399286 type.go:168] "Request Body" body=""
	I1206 10:50:21.972940  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:21.973280  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:22.473081  399286 type.go:168] "Request Body" body=""
	I1206 10:50:22.473165  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:22.473518  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:22.973293  399286 type.go:168] "Request Body" body=""
	I1206 10:50:22.973363  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:22.973736  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:22.973785  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:23.472452  399286 type.go:168] "Request Body" body=""
	I1206 10:50:23.472536  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:23.472892  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:23.972617  399286 type.go:168] "Request Body" body=""
	I1206 10:50:23.972696  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:23.973045  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:24.472734  399286 type.go:168] "Request Body" body=""
	I1206 10:50:24.472803  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:24.473087  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:24.972771  399286 type.go:168] "Request Body" body=""
	I1206 10:50:24.972846  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:24.973213  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:25.472766  399286 type.go:168] "Request Body" body=""
	I1206 10:50:25.472842  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:25.473168  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:25.473226  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:25.972592  399286 type.go:168] "Request Body" body=""
	I1206 10:50:25.972661  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:25.972949  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:26.472514  399286 type.go:168] "Request Body" body=""
	I1206 10:50:26.472594  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:26.472931  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:26.972899  399286 type.go:168] "Request Body" body=""
	I1206 10:50:26.972973  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:26.973261  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:27.473424  399286 type.go:168] "Request Body" body=""
	I1206 10:50:27.473500  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:27.473765  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:27.473815  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:27.972509  399286 type.go:168] "Request Body" body=""
	I1206 10:50:27.972592  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:27.972936  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:28.472486  399286 type.go:168] "Request Body" body=""
	I1206 10:50:28.472562  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:28.472923  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:28.972440  399286 type.go:168] "Request Body" body=""
	I1206 10:50:28.972512  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:28.972780  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:29.472546  399286 type.go:168] "Request Body" body=""
	I1206 10:50:29.472624  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:29.472988  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:29.972566  399286 type.go:168] "Request Body" body=""
	I1206 10:50:29.972650  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:29.972976  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:29.973030  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:30.472526  399286 type.go:168] "Request Body" body=""
	I1206 10:50:30.472595  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:30.472867  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:30.972538  399286 type.go:168] "Request Body" body=""
	I1206 10:50:30.972619  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:30.972967  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:31.472532  399286 type.go:168] "Request Body" body=""
	I1206 10:50:31.472614  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:31.472943  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:31.972822  399286 type.go:168] "Request Body" body=""
	I1206 10:50:31.972898  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:31.973163  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:31.973204  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:32.472846  399286 type.go:168] "Request Body" body=""
	I1206 10:50:32.472938  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:32.473300  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:32.973154  399286 type.go:168] "Request Body" body=""
	I1206 10:50:32.973228  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:32.973551  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:33.473237  399286 type.go:168] "Request Body" body=""
	I1206 10:50:33.473313  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:33.473581  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:33.973393  399286 type.go:168] "Request Body" body=""
	I1206 10:50:33.973465  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:33.973800  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:33.973854  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:34.472558  399286 type.go:168] "Request Body" body=""
	I1206 10:50:34.472637  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:34.472972  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:34.972519  399286 type.go:168] "Request Body" body=""
	I1206 10:50:34.972599  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:34.972924  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:35.472616  399286 type.go:168] "Request Body" body=""
	I1206 10:50:35.472695  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:35.473014  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:35.972579  399286 type.go:168] "Request Body" body=""
	I1206 10:50:35.972655  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:35.973034  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:36.472776  399286 type.go:168] "Request Body" body=""
	I1206 10:50:36.472844  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:36.473149  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:36.473213  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:36.972912  399286 type.go:168] "Request Body" body=""
	I1206 10:50:36.972989  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:36.973334  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:37.473153  399286 type.go:168] "Request Body" body=""
	I1206 10:50:37.473234  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:37.473545  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:37.973320  399286 type.go:168] "Request Body" body=""
	I1206 10:50:37.973389  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:37.973719  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:38.473508  399286 type.go:168] "Request Body" body=""
	I1206 10:50:38.473585  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:38.473917  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:38.473974  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:38.972671  399286 type.go:168] "Request Body" body=""
	I1206 10:50:38.972749  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:38.973130  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:39.472813  399286 type.go:168] "Request Body" body=""
	I1206 10:50:39.472890  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:39.473190  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:39.972557  399286 type.go:168] "Request Body" body=""
	I1206 10:50:39.972649  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:39.972986  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:40.472571  399286 type.go:168] "Request Body" body=""
	I1206 10:50:40.472658  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:40.472975  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:40.972515  399286 type.go:168] "Request Body" body=""
	I1206 10:50:40.972584  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:40.972892  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:40.972940  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:41.472601  399286 type.go:168] "Request Body" body=""
	I1206 10:50:41.472684  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:41.473063  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:41.972896  399286 type.go:168] "Request Body" body=""
	I1206 10:50:41.972981  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:41.973322  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:42.472688  399286 type.go:168] "Request Body" body=""
	I1206 10:50:42.472753  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:42.473021  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:42.972603  399286 type.go:168] "Request Body" body=""
	I1206 10:50:42.972678  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:42.973024  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:42.973077  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:43.472713  399286 type.go:168] "Request Body" body=""
	I1206 10:50:43.472795  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:43.473163  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:43.972566  399286 type.go:168] "Request Body" body=""
	I1206 10:50:43.972641  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:43.972943  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:44.472578  399286 type.go:168] "Request Body" body=""
	I1206 10:50:44.472651  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:44.472949  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:44.972603  399286 type.go:168] "Request Body" body=""
	I1206 10:50:44.972680  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:44.973055  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:44.973112  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:45.472760  399286 type.go:168] "Request Body" body=""
	I1206 10:50:45.472833  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:45.473168  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:45.972582  399286 type.go:168] "Request Body" body=""
	I1206 10:50:45.972658  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:45.972953  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:46.472685  399286 type.go:168] "Request Body" body=""
	I1206 10:50:46.472772  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:46.473240  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:46.972953  399286 type.go:168] "Request Body" body=""
	I1206 10:50:46.973034  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:46.973311  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:46.973368  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:47.473089  399286 type.go:168] "Request Body" body=""
	I1206 10:50:47.473162  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:47.473495  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:47.973337  399286 type.go:168] "Request Body" body=""
	I1206 10:50:47.973414  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:47.973765  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:48.472461  399286 type.go:168] "Request Body" body=""
	I1206 10:50:48.472532  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:48.472800  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:48.972484  399286 type.go:168] "Request Body" body=""
	I1206 10:50:48.972555  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:48.972856  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:49.472591  399286 type.go:168] "Request Body" body=""
	I1206 10:50:49.472674  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:49.472971  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:49.473017  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:49.972486  399286 type.go:168] "Request Body" body=""
	I1206 10:50:49.972555  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:49.972818  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:50.472595  399286 type.go:168] "Request Body" body=""
	I1206 10:50:50.472673  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:50.473012  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:50.972610  399286 type.go:168] "Request Body" body=""
	I1206 10:50:50.972682  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:50.973033  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:51.472648  399286 type.go:168] "Request Body" body=""
	I1206 10:50:51.472722  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:51.473053  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:51.473104  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:51.972927  399286 type.go:168] "Request Body" body=""
	I1206 10:50:51.973006  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:51.973306  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:52.473170  399286 type.go:168] "Request Body" body=""
	I1206 10:50:52.473264  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:52.473614  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:52.973403  399286 type.go:168] "Request Body" body=""
	I1206 10:50:52.973483  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:52.973779  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:53.472504  399286 type.go:168] "Request Body" body=""
	I1206 10:50:53.472613  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:53.472956  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:53.972692  399286 type.go:168] "Request Body" body=""
	I1206 10:50:53.972766  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:53.973130  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:53.973190  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:54.472528  399286 type.go:168] "Request Body" body=""
	I1206 10:50:54.472607  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:54.472878  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:54.972605  399286 type.go:168] "Request Body" body=""
	I1206 10:50:54.972688  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:54.973068  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:55.472818  399286 type.go:168] "Request Body" body=""
	I1206 10:50:55.472895  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:55.473202  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:55.972520  399286 type.go:168] "Request Body" body=""
	I1206 10:50:55.972603  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:55.972935  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:56.472654  399286 type.go:168] "Request Body" body=""
	I1206 10:50:56.472729  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:56.473032  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:56.473084  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:56.972842  399286 type.go:168] "Request Body" body=""
	I1206 10:50:56.972920  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:56.973318  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:57.473075  399286 type.go:168] "Request Body" body=""
	I1206 10:50:57.473143  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:57.473455  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:57.973290  399286 type.go:168] "Request Body" body=""
	I1206 10:50:57.973373  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:57.973726  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:58.472463  399286 type.go:168] "Request Body" body=""
	I1206 10:50:58.472542  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:58.472877  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:58.972566  399286 type.go:168] "Request Body" body=""
	I1206 10:50:58.972641  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:58.972980  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:50:58.973033  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:50:59.472570  399286 type.go:168] "Request Body" body=""
	I1206 10:50:59.472643  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:59.472941  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:50:59.972571  399286 type.go:168] "Request Body" body=""
	I1206 10:50:59.972657  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:50:59.973000  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:51:00.472549  399286 type.go:168] "Request Body" body=""
	I1206 10:51:00.472645  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:51:00.473014  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:51:00.972576  399286 type.go:168] "Request Body" body=""
	I1206 10:51:00.972652  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:51:00.972971  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:51:01.472606  399286 type.go:168] "Request Body" body=""
	I1206 10:51:01.472680  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:51:01.473017  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:51:01.473078  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:51:01.972762  399286 type.go:168] "Request Body" body=""
	I1206 10:51:01.972832  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:51:01.973108  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:51:02.472577  399286 type.go:168] "Request Body" body=""
	I1206 10:51:02.472675  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:51:02.473037  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:51:02.972793  399286 type.go:168] "Request Body" body=""
	I1206 10:51:02.972870  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:51:02.973217  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:51:03.472909  399286 type.go:168] "Request Body" body=""
	I1206 10:51:03.472990  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:51:03.473316  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:51:03.473367  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:51:03.973130  399286 type.go:168] "Request Body" body=""
	I1206 10:51:03.973216  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:51:03.973569  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:51:04.473254  399286 type.go:168] "Request Body" body=""
	I1206 10:51:04.473335  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:51:04.473708  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:51:04.973449  399286 type.go:168] "Request Body" body=""
	I1206 10:51:04.973548  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:51:04.973831  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:51:05.472562  399286 type.go:168] "Request Body" body=""
	I1206 10:51:05.472640  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:51:05.472982  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:51:05.972586  399286 type.go:168] "Request Body" body=""
	I1206 10:51:05.972670  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:51:05.973021  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1206 10:51:05.973092  399286 node_ready.go:55] error getting node "functional-196950" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-196950": dial tcp 192.168.49.2:8441: connect: connection refused
	I1206 10:51:06.472600  399286 type.go:168] "Request Body" body=""
	I1206 10:51:06.472689  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:51:06.473022  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:51:06.972909  399286 type.go:168] "Request Body" body=""
	I1206 10:51:06.972998  399286 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-196950" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1206 10:51:06.973336  399286 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1206 10:51:07.473123  399286 type.go:168] "Request Body" body=""
	I1206 10:51:07.473186  399286 node_ready.go:38] duration metric: took 6m0.000853216s for node "functional-196950" to be "Ready" ...
	I1206 10:51:07.476374  399286 out.go:203] 
	W1206 10:51:07.479349  399286 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1206 10:51:07.479391  399286 out.go:285] * 
	W1206 10:51:07.481554  399286 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 10:51:07.484691  399286 out.go:203] 
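
	The six minutes of log above are one long retry loop: every ~500ms minikube issues GET /api/v1/nodes/functional-196950, the dial to 192.168.49.2:8441 is refused because the apiserver never comes back, and when the 6m0s deadline expires the wait gives up with "WaitNodeCondition: context deadline exceeded". Below is a minimal client-go sketch of that wait pattern; the helper name, clientset setup, and kubeconfig path are illustrative assumptions, not minikube's actual code.

	// Sketch only: poll a node's Ready condition every 500ms until it is
	// True or a 6-minute deadline expires, tolerating transient dial
	// errors the way the log above does ("will retry" on refused dials).
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady is a hypothetical helper mirroring the wait in the log.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // transient (e.g. connection refused): retry
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitNodeReady(context.Background(), cs, "functional-196950"); err != nil {
			fmt.Println("node never became Ready:", err) // here: context deadline exceeded
		}
	}

	Swallowing the Get error inside the poll condition is what produces the repeated "(will retry)" warnings above instead of an immediate failure; only the outer deadline ends the loop.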
	
	
	==> CRI-O <==
	Dec 06 10:51:16 functional-196950 crio[5345]: time="2025-12-06T10:51:16.681090489Z" level=info msg="Neither image nor artifact registry.k8s.io/pause:latest found" id=a6644682-b155-4975-9b4d-b3a25826b669 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:51:17 functional-196950 crio[5345]: time="2025-12-06T10:51:17.759167896Z" level=info msg="Checking image status: minikube-local-cache-test:functional-196950" id=de398f8f-3930-4a58-b990-1c3c5536fc97 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:51:17 functional-196950 crio[5345]: time="2025-12-06T10:51:17.759341149Z" level=info msg="Resolving \"minikube-local-cache-test\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 06 10:51:17 functional-196950 crio[5345]: time="2025-12-06T10:51:17.759414823Z" level=info msg="Image minikube-local-cache-test:functional-196950 not found" id=de398f8f-3930-4a58-b990-1c3c5536fc97 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:51:17 functional-196950 crio[5345]: time="2025-12-06T10:51:17.759504638Z" level=info msg="Neither image nor artfiact minikube-local-cache-test:functional-196950 found" id=de398f8f-3930-4a58-b990-1c3c5536fc97 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:51:17 functional-196950 crio[5345]: time="2025-12-06T10:51:17.800436231Z" level=info msg="Checking image status: docker.io/library/minikube-local-cache-test:functional-196950" id=3839a3f9-7a20-49dc-a450-400650f95d3e name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:51:17 functional-196950 crio[5345]: time="2025-12-06T10:51:17.800583005Z" level=info msg="Image docker.io/library/minikube-local-cache-test:functional-196950 not found" id=3839a3f9-7a20-49dc-a450-400650f95d3e name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:51:17 functional-196950 crio[5345]: time="2025-12-06T10:51:17.800624844Z" level=info msg="Neither image nor artfiact docker.io/library/minikube-local-cache-test:functional-196950 found" id=3839a3f9-7a20-49dc-a450-400650f95d3e name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:51:17 functional-196950 crio[5345]: time="2025-12-06T10:51:17.825690578Z" level=info msg="Checking image status: localhost/library/minikube-local-cache-test:functional-196950" id=7bfa5ab4-ee23-448e-8c7c-6013d4d125e1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:51:17 functional-196950 crio[5345]: time="2025-12-06T10:51:17.825854969Z" level=info msg="Image localhost/library/minikube-local-cache-test:functional-196950 not found" id=7bfa5ab4-ee23-448e-8c7c-6013d4d125e1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:51:17 functional-196950 crio[5345]: time="2025-12-06T10:51:17.82590516Z" level=info msg="Neither image nor artfiact localhost/library/minikube-local-cache-test:functional-196950 found" id=7bfa5ab4-ee23-448e-8c7c-6013d4d125e1 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:51:18 functional-196950 crio[5345]: time="2025-12-06T10:51:18.834420397Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=b1480b3b-5d55-4e0e-a195-dda8a56ce4fe name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:51:19 functional-196950 crio[5345]: time="2025-12-06T10:51:19.174640107Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=3e7de2ab-8f6d-4650-af37-a5bf48c4f78c name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:51:19 functional-196950 crio[5345]: time="2025-12-06T10:51:19.174865644Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=3e7de2ab-8f6d-4650-af37-a5bf48c4f78c name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:51:19 functional-196950 crio[5345]: time="2025-12-06T10:51:19.174940475Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=3e7de2ab-8f6d-4650-af37-a5bf48c4f78c name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:51:19 functional-196950 crio[5345]: time="2025-12-06T10:51:19.752464757Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=f100257a-1b76-48a9-9d71-34fd29259970 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:51:19 functional-196950 crio[5345]: time="2025-12-06T10:51:19.752617653Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=f100257a-1b76-48a9-9d71-34fd29259970 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:51:19 functional-196950 crio[5345]: time="2025-12-06T10:51:19.752655446Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=f100257a-1b76-48a9-9d71-34fd29259970 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:51:19 functional-196950 crio[5345]: time="2025-12-06T10:51:19.778059202Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=6b4c32e1-a492-4fc8-b625-6d4935efa3a7 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:51:19 functional-196950 crio[5345]: time="2025-12-06T10:51:19.778214847Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=6b4c32e1-a492-4fc8-b625-6d4935efa3a7 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:51:19 functional-196950 crio[5345]: time="2025-12-06T10:51:19.77825968Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=6b4c32e1-a492-4fc8-b625-6d4935efa3a7 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:51:19 functional-196950 crio[5345]: time="2025-12-06T10:51:19.805594859Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=fc342d1a-0b15-4db6-884f-13091ec17c55 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:51:19 functional-196950 crio[5345]: time="2025-12-06T10:51:19.805730762Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=fc342d1a-0b15-4db6-884f-13091ec17c55 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:51:19 functional-196950 crio[5345]: time="2025-12-06T10:51:19.805765478Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=fc342d1a-0b15-4db6-884f-13091ec17c55 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:51:20 functional-196950 crio[5345]: time="2025-12-06T10:51:20.475486781Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=f433a295-3c0b-4836-ab7c-00cce7815750 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:51:24.610250    9535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:51:24.610727    9535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:51:24.612280    9535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:51:24.612708    9535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:51:24.614049    9535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 6 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014752] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503231] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.065820] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.901896] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.944603] kauditd_printk_skb: 39 callbacks suppressed
	[Dec 6 09:04] hrtimer: interrupt took 32230230 ns
	[Dec 6 10:25] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 6 10:26] overlayfs: idmapped layers are currently not supported
	[  +0.066821] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 6 10:32] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:33] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:51] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 10:51:24 up  2:33,  0 user,  load average: 0.69, 0.38, 0.88
	Linux functional-196950 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 10:51:22 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:51:22 functional-196950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 830.
	Dec 06 10:51:22 functional-196950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:22 functional-196950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:23 functional-196950 kubelet[9412]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:51:23 functional-196950 kubelet[9412]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:51:23 functional-196950 kubelet[9412]: E1206 10:51:23.024230    9412 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:51:23 functional-196950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:51:23 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:51:23 functional-196950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 831.
	Dec 06 10:51:23 functional-196950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:23 functional-196950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:23 functional-196950 kubelet[9446]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:51:23 functional-196950 kubelet[9446]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:51:23 functional-196950 kubelet[9446]: E1206 10:51:23.779952    9446 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:51:23 functional-196950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:51:23 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 10:51:24 functional-196950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 832.
	Dec 06 10:51:24 functional-196950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:24 functional-196950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 10:51:24 functional-196950 kubelet[9517]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:51:24 functional-196950 kubelet[9517]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 10:51:24 functional-196950 kubelet[9517]: E1206 10:51:24.536872    9517 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 10:51:24 functional-196950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 10:51:24 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
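The CRI-O section of the dump above also shows how an unqualified image name such as minikube-local-cache-test:functional-196950 gets resolved: CRI-O walks the unqualified-search registries listed in /etc/containers/registries.conf.d/crio.conf, which is why the lookups proceed from docker.io/library/... to localhost/library/.... A minimal sketch for checking that search list on the node (the file path is taken from the log line itself; the list contents are inferred from the lookup order and are not in the report):

	# Print the search-registry configuration CRI-O consulted.
	minikube -p functional-196950 ssh -- cat /etc/containers/registries.conf.d/crio.conf

	# A configuration matching the lookup order above would contain a line like:
	#   unqualified-search-registries = ["docker.io", "localhost"]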
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-196950 -n functional-196950
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-196950 -n functional-196950: exit status 2 (355.083277ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-196950" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.49s)
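The kubelet journal above explains the connection-refused errors throughout this test: kubelet v1.35.0-beta.0 validates the host cgroup hierarchy at startup, refuses cgroup v1, and exits, so systemd loops through restarts (counter 830 to 832 within three seconds) and the apiserver never comes up. A minimal sketch for confirming the node's cgroup mode, using standard commands and the profile name from this report (cgroup2fs means v2, tmpfs means v1):

	# Filesystem type of the cgroup mount inside the minikube node.
	minikube -p functional-196950 ssh -- stat -fc %T /sys/fs/cgroup

	# Watch the kubelet restart loop directly on the node.
	minikube -p functional-196950 ssh -- sudo journalctl -u kubelet --no-pager -n 20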

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (735.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-196950 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1206 10:53:41.817806  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:55:45.365013  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-205266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:57:08.435544  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-205266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:58:41.815545  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:00:45.365989  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-205266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-196950 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 109 (12m12.890217353s)

-- stdout --
	* [functional-196950] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-362985/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-362985/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-196950" primary control-plane node in "functional-196950" cluster
	* Pulling base image v0.0.48-1764843390-22032 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000270227s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000448393s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000448393s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
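Both kubeadm attempts above die at the same wait-control-plane phase for the reason flagged in the SystemVerification warning: per KEP-5573, kubelet v1.35 refuses to run on a cgroup v1 host unless the KubeletConfiguration field FailCgroupV1 is explicitly set to false. A hypothetical override is sketched below; it assumes the kubelet reads a drop-in config directory via --config-dir (the path is illustrative, and minikube would likely need its own mechanism, such as kubeadm patches, to inject this):

	# Illustrative drop-in; assumes kubelet was started with --config-dir pointing here.
	cat <<'EOF' | sudo tee /etc/kubernetes/kubelet.conf.d/99-cgroup-v1.conf
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
	EOF
	sudo systemctl restart kubelet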
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-arm64 start -p functional-196950 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 109
functional_test.go:776: restart took 12m12.891456954s for "functional-196950" cluster.
I1206 11:03:38.507506  364855 config.go:182] Loaded profile config "functional-196950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-196950
helpers_test.go:243: (dbg) docker inspect functional-196950:

-- stdout --
	[
	    {
	        "Id": "d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1",
	        "Created": "2025-12-06T10:36:45.201779678Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 393848,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T10:36:45.318229053Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/hostname",
	        "HostsPath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/hosts",
	        "LogPath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1-json.log",
	        "Name": "/functional-196950",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-196950:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-196950",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1",
	                "LowerDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1-init/diff:/var/lib/docker/overlay2/5011226d55616c9977b14c1fe617d1302fe59373df05ce8ec6e21b79143a1c57/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-196950",
	                "Source": "/var/lib/docker/volumes/functional-196950/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-196950",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-196950",
	                "name.minikube.sigs.k8s.io": "functional-196950",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9b8f961d55d7529aed7b841f2ac9f818c22ff12b8ad73f2d6bcee22656d9749a",
	            "SandboxKey": "/var/run/docker/netns/9b8f961d55d7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33158"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-196950": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:c1:40:2a:93:47",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a566bfdfd33a868cf61e5b18b36cbd55e9868f24cbb091e055ae606aeb8c6f03",
	                    "EndpointID": "452fe32bde0c42c4c35d700488ae93aeecc6c6a971ac6f1a8a492dbc4b328ed9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-196950",
	                        "d150aac7296d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
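The NetworkSettings block above shows the node's apiserver port 8441/tcp published only on 127.0.0.1 (host port 33161 in this run), which is the endpoint the status probes below go through. A small sketch for resolving that mapping from the host, using standard Docker commands and the container name from the report:

	# Published host endpoint for the apiserver port inside the node.
	docker port functional-196950 8441/tcp

	# The same value via a Go template over the inspect data shown above.
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-196950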
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-196950 -n functional-196950
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-196950 -n functional-196950: exit status 2 (336.899089ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-196950 logs -n 25: (1.00065893s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-205266 image ls --format yaml --alsologtostderr                                                                                        │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ ssh     │ functional-205266 ssh pgrep buildkitd                                                                                                             │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │                     │
	│ image   │ functional-205266 image ls --format json --alsologtostderr                                                                                        │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image   │ functional-205266 image build -t localhost/my-image:functional-205266 testdata/build --alsologtostderr                                            │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image   │ functional-205266 image ls --format table --alsologtostderr                                                                                       │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image   │ functional-205266 image ls                                                                                                                        │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ delete  │ -p functional-205266                                                                                                                              │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ start   │ -p functional-196950 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │                     │
	│ start   │ -p functional-196950 --alsologtostderr -v=8                                                                                                       │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:45 UTC │                     │
	│ cache   │ functional-196950 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ cache   │ functional-196950 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ cache   │ functional-196950 cache add registry.k8s.io/pause:latest                                                                                          │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ cache   │ functional-196950 cache add minikube-local-cache-test:functional-196950                                                                           │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ cache   │ functional-196950 cache delete minikube-local-cache-test:functional-196950                                                                        │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ ssh     │ functional-196950 ssh sudo crictl images                                                                                                          │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ ssh     │ functional-196950 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ ssh     │ functional-196950 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │                     │
	│ cache   │ functional-196950 cache reload                                                                                                                    │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ ssh     │ functional-196950 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ kubectl │ functional-196950 kubectl -- --context functional-196950 get pods                                                                                 │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │                     │
	│ start   │ -p functional-196950 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                          │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:51:25
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:51:25.658528  405191 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:51:25.659862  405191 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:51:25.659873  405191 out.go:374] Setting ErrFile to fd 2...
	I1206 10:51:25.659879  405191 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:51:25.660272  405191 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 10:51:25.660784  405191 out.go:368] Setting JSON to false
	I1206 10:51:25.661671  405191 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9237,"bootTime":1765009049,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 10:51:25.661825  405191 start.go:143] virtualization:  
	I1206 10:51:25.665170  405191 out.go:179] * [functional-196950] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:51:25.668974  405191 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 10:51:25.669057  405191 notify.go:221] Checking for updates...
	I1206 10:51:25.674658  405191 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:51:25.677504  405191 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 10:51:25.680242  405191 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-362985/.minikube
	I1206 10:51:25.683061  405191 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:51:25.685807  405191 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:51:25.689056  405191 config.go:182] Loaded profile config "functional-196950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:51:25.689150  405191 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:51:25.719603  405191 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:51:25.719706  405191 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:51:25.776170  405191 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-06 10:51:25.766414658 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:51:25.776279  405191 docker.go:319] overlay module found
	I1206 10:51:25.779319  405191 out.go:179] * Using the docker driver based on existing profile
	I1206 10:51:25.782157  405191 start.go:309] selected driver: docker
	I1206 10:51:25.782168  405191 start.go:927] validating driver "docker" against &{Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:51:25.782268  405191 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:51:25.782379  405191 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:51:25.843232  405191 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-06 10:51:25.834027648 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:51:25.843742  405191 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 10:51:25.843762  405191 cni.go:84] Creating CNI manager for ""
	I1206 10:51:25.843817  405191 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:51:25.843868  405191 start.go:353] cluster config:
	{Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:51:25.846980  405191 out.go:179] * Starting "functional-196950" primary control-plane node in "functional-196950" cluster
	I1206 10:51:25.849840  405191 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 10:51:25.852721  405191 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:51:25.855512  405191 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 10:51:25.855549  405191 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1206 10:51:25.855557  405191 cache.go:65] Caching tarball of preloaded images
	I1206 10:51:25.855585  405191 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:51:25.855649  405191 preload.go:238] Found /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1206 10:51:25.855670  405191 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1206 10:51:25.855775  405191 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/config.json ...
	I1206 10:51:25.875281  405191 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 10:51:25.875292  405191 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 10:51:25.875312  405191 cache.go:243] Successfully downloaded all kic artifacts
	I1206 10:51:25.875342  405191 start.go:360] acquireMachinesLock for functional-196950: {Name:mkd2471f275d1d2a438cb4ce89f1d1521a0fb340 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:51:25.875462  405191 start.go:364] duration metric: took 100.145µs to acquireMachinesLock for "functional-196950"
	I1206 10:51:25.875483  405191 start.go:96] Skipping create...Using existing machine configuration
	I1206 10:51:25.875487  405191 fix.go:54] fixHost starting: 
	I1206 10:51:25.875763  405191 cli_runner.go:164] Run: docker container inspect functional-196950 --format={{.State.Status}}
	I1206 10:51:25.893454  405191 fix.go:112] recreateIfNeeded on functional-196950: state=Running err=<nil>
	W1206 10:51:25.893482  405191 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 10:51:25.896578  405191 out.go:252] * Updating the running docker "functional-196950" container ...
	I1206 10:51:25.896608  405191 machine.go:94] provisionDockerMachine start ...
	I1206 10:51:25.896697  405191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:51:25.913940  405191 main.go:143] libmachine: Using SSH client type: native
	I1206 10:51:25.914320  405191 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1206 10:51:25.914327  405191 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:51:26.075155  405191 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-196950
	
	I1206 10:51:26.075169  405191 ubuntu.go:182] provisioning hostname "functional-196950"
	I1206 10:51:26.075252  405191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:51:26.094744  405191 main.go:143] libmachine: Using SSH client type: native
	I1206 10:51:26.095070  405191 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1206 10:51:26.095080  405191 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-196950 && echo "functional-196950" | sudo tee /etc/hostname
	I1206 10:51:26.261114  405191 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-196950
	
	I1206 10:51:26.261197  405191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:51:26.279848  405191 main.go:143] libmachine: Using SSH client type: native
	I1206 10:51:26.280166  405191 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1206 10:51:26.280180  405191 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-196950' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-196950/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-196950' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:51:26.431933  405191 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 10:51:26.431953  405191 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-362985/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-362985/.minikube}
	I1206 10:51:26.431971  405191 ubuntu.go:190] setting up certificates
	I1206 10:51:26.431995  405191 provision.go:84] configureAuth start
	I1206 10:51:26.432056  405191 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-196950
	I1206 10:51:26.450343  405191 provision.go:143] copyHostCerts
	I1206 10:51:26.450415  405191 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem, removing ...
	I1206 10:51:26.450432  405191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem
	I1206 10:51:26.450505  405191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem (1082 bytes)
	I1206 10:51:26.450607  405191 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem, removing ...
	I1206 10:51:26.450611  405191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem
	I1206 10:51:26.450636  405191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem (1123 bytes)
	I1206 10:51:26.450689  405191 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem, removing ...
	I1206 10:51:26.450693  405191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem
	I1206 10:51:26.450714  405191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem (1679 bytes)
	I1206 10:51:26.450755  405191 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem org=jenkins.functional-196950 san=[127.0.0.1 192.168.49.2 functional-196950 localhost minikube]
	I1206 10:51:26.540911  405191 provision.go:177] copyRemoteCerts
	I1206 10:51:26.540967  405191 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:51:26.541011  405191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:51:26.559000  405191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:51:26.664415  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 10:51:26.682850  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 10:51:26.700635  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 10:51:26.720260  405191 provision.go:87] duration metric: took 288.251554ms to configureAuth
	I1206 10:51:26.720277  405191 ubuntu.go:206] setting minikube options for container-runtime
	I1206 10:51:26.720482  405191 config.go:182] Loaded profile config "functional-196950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:51:26.720577  405191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:51:26.740294  405191 main.go:143] libmachine: Using SSH client type: native
	I1206 10:51:26.740607  405191 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1206 10:51:26.740618  405191 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 10:51:27.107160  405191 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 10:51:27.107175  405191 machine.go:97] duration metric: took 1.210560762s to provisionDockerMachine
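
The CRIO_MINIKUBE_OPTIONS drop-in written during this provisioning step can be inspected directly on the node. A minimal sketch, assuming the profile name functional-196950 from this log and a minikube binary on PATH:

    # Inspect the sysconfig drop-in that provisioning just wrote (sketch).
    minikube -p functional-196950 ssh -- cat /etc/sysconfig/crio.minikube
    # Expected, per the SSH output above:
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
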
	I1206 10:51:27.107185  405191 start.go:293] postStartSetup for "functional-196950" (driver="docker")
	I1206 10:51:27.107196  405191 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:51:27.107253  405191 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:51:27.107294  405191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:51:27.129039  405191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:51:27.236148  405191 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:51:27.240016  405191 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 10:51:27.240036  405191 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 10:51:27.240047  405191 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-362985/.minikube/addons for local assets ...
	I1206 10:51:27.240125  405191 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-362985/.minikube/files for local assets ...
	I1206 10:51:27.240216  405191 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem -> 3648552.pem in /etc/ssl/certs
	I1206 10:51:27.240311  405191 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/test/nested/copy/364855/hosts -> hosts in /etc/test/nested/copy/364855
	I1206 10:51:27.240389  405191 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/364855
	I1206 10:51:27.248525  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem --> /etc/ssl/certs/3648552.pem (1708 bytes)
	I1206 10:51:27.267246  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/test/nested/copy/364855/hosts --> /etc/test/nested/copy/364855/hosts (40 bytes)
	I1206 10:51:27.285080  405191 start.go:296] duration metric: took 177.880099ms for postStartSetup
	I1206 10:51:27.285152  405191 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:51:27.285189  405191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:51:27.302563  405191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:51:27.404400  405191 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 10:51:27.408968  405191 fix.go:56] duration metric: took 1.533473357s for fixHost
	I1206 10:51:27.408984  405191 start.go:83] releasing machines lock for "functional-196950", held for 1.533513702s
	I1206 10:51:27.409052  405191 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-196950
	I1206 10:51:27.427444  405191 ssh_runner.go:195] Run: cat /version.json
	I1206 10:51:27.427475  405191 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:51:27.427488  405191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:51:27.427532  405191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:51:27.449136  405191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:51:27.450292  405191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:51:27.555364  405191 ssh_runner.go:195] Run: systemctl --version
	I1206 10:51:27.645936  405191 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 10:51:27.683240  405191 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 10:51:27.687562  405191 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:51:27.687626  405191 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:51:27.695460  405191 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 10:51:27.695474  405191 start.go:496] detecting cgroup driver to use...
	I1206 10:51:27.695505  405191 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 10:51:27.695551  405191 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 10:51:27.711018  405191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 10:51:27.724651  405191 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:51:27.724707  405191 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:51:27.740806  405191 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:51:27.754100  405191 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:51:27.883046  405191 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:51:28.013378  405191 docker.go:234] disabling docker service ...
	I1206 10:51:28.013440  405191 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:51:28.030310  405191 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:51:28.044424  405191 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:51:28.162200  405191 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:51:28.315775  405191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 10:51:28.333888  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:51:28.350625  405191 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 10:51:28.350700  405191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:51:28.360184  405191 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 10:51:28.360243  405191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:51:28.369224  405191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:51:28.378656  405191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:51:28.387862  405191 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:51:28.396244  405191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:51:28.405446  405191 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:51:28.414057  405191 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:51:28.423226  405191 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:51:28.430865  405191 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 10:51:28.438644  405191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:51:28.553737  405191 ssh_runner.go:195] Run: sudo systemctl restart crio
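
The sed edits above converge /etc/crio/crio.conf.d/02-crio.conf on a known state before the crio restart. A minimal verification sketch, assuming the same profile name (shell quoting through `minikube ssh` may need adjustment):

    # Confirm the pause image, cgroup manager, and sysctl override took effect (sketch).
    minikube -p functional-196950 ssh -- \
      "grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"
    # Expected, per the edits above:
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",
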
	I1206 10:51:28.722710  405191 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 10:51:28.722782  405191 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 10:51:28.727796  405191 start.go:564] Will wait 60s for crictl version
	I1206 10:51:28.727854  405191 ssh_runner.go:195] Run: which crictl
	I1206 10:51:28.731603  405191 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 10:51:28.757634  405191 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 10:51:28.757708  405191 ssh_runner.go:195] Run: crio --version
	I1206 10:51:28.786864  405191 ssh_runner.go:195] Run: crio --version
	I1206 10:51:28.819624  405191 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1206 10:51:28.822438  405191 cli_runner.go:164] Run: docker network inspect functional-196950 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:51:28.838919  405191 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 10:51:28.845850  405191 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
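
The extra config echoed here corresponds to minikube's --extra-config flag; the ExtraOptions entry in the cluster config below is what it serializes to. A minimal sketch of a start invocation that produces it, using the driver and runtime recorded in this log:

    # Sketch: start the profile with the same admission-plugin override.
    minikube start -p functional-196950 \
      --driver=docker \
      --container-runtime=crio \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision
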
	I1206 10:51:28.848840  405191 kubeadm.go:884] updating cluster {Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:51:28.848980  405191 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 10:51:28.849059  405191 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:51:28.884770  405191 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:51:28.884782  405191 crio.go:433] Images already preloaded, skipping extraction
	I1206 10:51:28.884839  405191 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:51:28.911560  405191 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:51:28.911574  405191 cache_images.go:86] Images are preloaded, skipping loading
	I1206 10:51:28.911581  405191 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1206 10:51:28.911685  405191 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-196950 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
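
The kubelet unit rendered above is installed as /lib/systemd/system/kubelet.service with a drop-in at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (both are scp'd later in this log). A minimal sketch for viewing the merged unit on the node, assuming the same profile name:

    # Show the kubelet service plus its minikube-managed drop-in (sketch).
    minikube -p functional-196950 ssh -- sudo systemctl cat kubelet
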
	I1206 10:51:28.911771  405191 ssh_runner.go:195] Run: crio config
	I1206 10:51:28.966566  405191 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1206 10:51:28.966595  405191 cni.go:84] Creating CNI manager for ""
	I1206 10:51:28.966604  405191 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:51:28.966619  405191 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 10:51:28.966641  405191 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-196950 NodeName:functional-196950 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:51:28.966791  405191 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-196950"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
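
The rendered config above is written to /var/tmp/minikube/kubeadm.yaml.new and promoted to kubeadm.yaml later in this log. A minimal sketch for inspecting (and, on recent kubeadm releases, validating) it in place, assuming the same profile name:

    # Inspect the kubeadm config actually used for the restart (sketch).
    minikube -p functional-196950 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml
    # On kubeadm v1.26+ the file can also be checked with (assumption, not from this log):
    # sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
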
	
	I1206 10:51:28.966870  405191 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 10:51:28.978798  405191 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:51:28.978872  405191 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:51:28.987304  405191 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1206 10:51:29.001847  405191 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 10:51:29.017577  405191 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1206 10:51:29.031751  405191 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 10:51:29.036513  405191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:51:29.155805  405191 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:51:29.722153  405191 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950 for IP: 192.168.49.2
	I1206 10:51:29.722163  405191 certs.go:195] generating shared ca certs ...
	I1206 10:51:29.722178  405191 certs.go:227] acquiring lock for ca certs: {Name:mke2ec61a37b6f3abbcbeb9abd23d6a19d011dd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:51:29.722312  405191 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.key
	I1206 10:51:29.722350  405191 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.key
	I1206 10:51:29.722357  405191 certs.go:257] generating profile certs ...
	I1206 10:51:29.722458  405191 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.key
	I1206 10:51:29.722506  405191 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.key.a77b39a6
	I1206 10:51:29.722550  405191 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.key
	I1206 10:51:29.722659  405191 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855.pem (1338 bytes)
	W1206 10:51:29.722686  405191 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855_empty.pem, impossibly tiny 0 bytes
	I1206 10:51:29.722693  405191 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 10:51:29.722721  405191 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem (1082 bytes)
	I1206 10:51:29.722747  405191 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:51:29.722776  405191 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem (1679 bytes)
	I1206 10:51:29.722816  405191 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem (1708 bytes)
	I1206 10:51:29.723422  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:51:29.745118  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 10:51:29.764772  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:51:29.783979  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 10:51:29.803249  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 10:51:29.821820  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 10:51:29.840052  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:51:29.858172  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 10:51:29.876447  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:51:29.894619  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855.pem --> /usr/share/ca-certificates/364855.pem (1338 bytes)
	I1206 10:51:29.912710  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem --> /usr/share/ca-certificates/3648552.pem (1708 bytes)
	I1206 10:51:29.930993  405191 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 10:51:29.944776  405191 ssh_runner.go:195] Run: openssl version
	I1206 10:51:29.951232  405191 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:51:29.958913  405191 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:51:29.966922  405191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:51:29.970672  405191 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:26 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:51:29.970730  405191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:51:30.016305  405191 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 10:51:30.031889  405191 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364855.pem
	I1206 10:51:30.048455  405191 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364855.pem /etc/ssl/certs/364855.pem
	I1206 10:51:30.063564  405191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364855.pem
	I1206 10:51:30.076207  405191 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:36 /usr/share/ca-certificates/364855.pem
	I1206 10:51:30.076271  405191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364855.pem
	I1206 10:51:30.128156  405191 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 10:51:30.136853  405191 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648552.pem
	I1206 10:51:30.146061  405191 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648552.pem /etc/ssl/certs/3648552.pem
	I1206 10:51:30.154785  405191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648552.pem
	I1206 10:51:30.159209  405191 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:36 /usr/share/ca-certificates/3648552.pem
	I1206 10:51:30.159296  405191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648552.pem
	I1206 10:51:30.202450  405191 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 10:51:30.210421  405191 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:51:30.214689  405191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 10:51:30.257294  405191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 10:51:30.301161  405191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 10:51:30.342552  405191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 10:51:30.384443  405191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 10:51:30.426153  405191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1206 10:51:30.467193  405191 kubeadm.go:401] StartCluster: {Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:51:30.467269  405191 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:51:30.467336  405191 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:51:30.505294  405191 cri.go:89] found id: ""
	I1206 10:51:30.505356  405191 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:51:30.514317  405191 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 10:51:30.514327  405191 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 10:51:30.514378  405191 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 10:51:30.522953  405191 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:51:30.523619  405191 kubeconfig.go:125] found "functional-196950" server: "https://192.168.49.2:8441"
	I1206 10:51:30.525284  405191 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 10:51:30.535655  405191 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-06 10:36:53.608460602 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-06 10:51:29.025529796 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1206 10:51:30.535667  405191 kubeadm.go:1161] stopping kube-system containers ...
	I1206 10:51:30.535679  405191 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1206 10:51:30.535750  405191 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:51:30.563289  405191 cri.go:89] found id: ""
	I1206 10:51:30.563367  405191 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1206 10:51:30.577669  405191 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:51:30.585599  405191 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec  6 10:40 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  6 10:40 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  6 10:40 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec  6 10:40 /etc/kubernetes/scheduler.conf
	
	I1206 10:51:30.585661  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:51:30.593607  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:51:30.601561  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:51:30.601615  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:51:30.609082  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:51:30.616706  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:51:30.616764  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:51:30.624576  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:51:30.632333  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:51:30.632396  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 10:51:30.640022  405191 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 10:51:30.648015  405191 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:51:30.694279  405191 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:51:31.789747  405191 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.095443049s)
	I1206 10:51:31.789807  405191 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:51:31.992373  405191 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:51:32.066243  405191 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:51:32.115098  405191 api_server.go:52] waiting for apiserver process to appear ...
	I1206 10:51:32.115193  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... 118 further identical "sudo pgrep -xnf kube-apiserver.*minikube.*" polls at ~500 ms intervals, 10:51:32.616 through 10:52:31.115, elided ...]
	I1206 10:52:31.616201  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
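	(The ~500 ms polling above is minikube's apiserver wait loop. An equivalent shell sketch — the pgrep pattern is verbatim from the log; the 60 s deadline is an assumption that merely matches the window observed here, not a value taken from minikube:

	    # Poll for the apiserver process every 500ms until it appears or a
	    # deadline passes (the deadline value is an assumption)
	    deadline=$(( $(date +%s) + 60 ))
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null 2>&1; do
	      if [ "$(date +%s)" -ge "$deadline" ]; then
	        echo "kube-apiserver never appeared" >&2
	        break
	      fi
	      sleep 0.5
	    done
	)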
	I1206 10:52:32.116069  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:52:32.116145  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:52:32.141390  405191 cri.go:89] found id: ""
	I1206 10:52:32.141404  405191 logs.go:282] 0 containers: []
	W1206 10:52:32.141411  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:52:32.141416  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:52:32.141473  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:52:32.166484  405191 cri.go:89] found id: ""
	I1206 10:52:32.166497  405191 logs.go:282] 0 containers: []
	W1206 10:52:32.166504  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:52:32.166509  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:52:32.166565  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:52:32.194996  405191 cri.go:89] found id: ""
	I1206 10:52:32.195009  405191 logs.go:282] 0 containers: []
	W1206 10:52:32.195016  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:52:32.195021  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:52:32.195076  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:52:32.221300  405191 cri.go:89] found id: ""
	I1206 10:52:32.221313  405191 logs.go:282] 0 containers: []
	W1206 10:52:32.221321  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:52:32.221326  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:52:32.221382  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:52:32.247157  405191 cri.go:89] found id: ""
	I1206 10:52:32.247171  405191 logs.go:282] 0 containers: []
	W1206 10:52:32.247178  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:52:32.247201  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:52:32.247261  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:52:32.272996  405191 cri.go:89] found id: ""
	I1206 10:52:32.273011  405191 logs.go:282] 0 containers: []
	W1206 10:52:32.273018  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:52:32.273023  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:52:32.273087  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:52:32.298872  405191 cri.go:89] found id: ""
	I1206 10:52:32.298885  405191 logs.go:282] 0 containers: []
	W1206 10:52:32.298892  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:52:32.298899  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:52:32.298909  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:52:32.365036  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:52:32.365056  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:52:32.380152  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:52:32.380168  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:52:32.448480  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:52:32.439513   11005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:32.440191   11005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:32.441917   11005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:32.442441   11005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:32.444184   11005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
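	(The repeated "connection refused" on localhost:8441 means nothing is listening on the apiserver port at all, consistent with the empty crictl listings above. A quick manual probe of the same endpoint — a hypothetical one-liner, not part of the test itself:

	    # Probe the apiserver port the kubeconfig points at; -k skips cert
	    # verification since we only care whether anything is listening
	    curl -sk https://localhost:8441/healthz || echo "nothing listening on 8441"
	)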
	I1206 10:52:32.448508  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:52:32.448519  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:52:32.521363  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:52:32.521385  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
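	(Each diagnostic round in this log runs the same per-component crictl query seven times. Collapsed into one loop for reference — the component names and the crictl invocation are verbatim from the log:

	    # Check each expected control-plane component for a container in any
	    # state; an empty result reproduces the "No container was found" warnings
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      [ -z "$ids" ] && echo "No container was found matching \"$name\""
	    done
	)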
	I1206 10:52:35.051557  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:52:35.061829  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:52:35.061887  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:52:35.090093  405191 cri.go:89] found id: ""
	I1206 10:52:35.090109  405191 logs.go:282] 0 containers: []
	W1206 10:52:35.090116  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:52:35.090123  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:52:35.090185  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:52:35.120692  405191 cri.go:89] found id: ""
	I1206 10:52:35.120706  405191 logs.go:282] 0 containers: []
	W1206 10:52:35.120713  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:52:35.120718  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:52:35.120781  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:52:35.150871  405191 cri.go:89] found id: ""
	I1206 10:52:35.150885  405191 logs.go:282] 0 containers: []
	W1206 10:52:35.150895  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:52:35.150901  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:52:35.150966  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:52:35.178176  405191 cri.go:89] found id: ""
	I1206 10:52:35.178189  405191 logs.go:282] 0 containers: []
	W1206 10:52:35.178196  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:52:35.178201  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:52:35.178259  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:52:35.203836  405191 cri.go:89] found id: ""
	I1206 10:52:35.203851  405191 logs.go:282] 0 containers: []
	W1206 10:52:35.203858  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:52:35.203864  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:52:35.203922  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:52:35.229838  405191 cri.go:89] found id: ""
	I1206 10:52:35.229852  405191 logs.go:282] 0 containers: []
	W1206 10:52:35.229860  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:52:35.229865  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:52:35.229923  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:52:35.255728  405191 cri.go:89] found id: ""
	I1206 10:52:35.255742  405191 logs.go:282] 0 containers: []
	W1206 10:52:35.255749  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:52:35.255763  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:52:35.255774  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:52:35.326293  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:52:35.326313  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:52:35.341587  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:52:35.341603  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:52:35.406128  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:52:35.396962   11109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:35.397407   11109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:35.399334   11109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:35.399842   11109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:35.401729   11109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:52:35.406138  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:52:35.406148  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:52:35.477539  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:52:35.477561  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:52:38.012461  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:52:38.026662  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:52:38.026746  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:52:38.057501  405191 cri.go:89] found id: ""
	I1206 10:52:38.057514  405191 logs.go:282] 0 containers: []
	W1206 10:52:38.057522  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:52:38.057527  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:52:38.057597  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:52:38.087721  405191 cri.go:89] found id: ""
	I1206 10:52:38.087736  405191 logs.go:282] 0 containers: []
	W1206 10:52:38.087744  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:52:38.087750  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:52:38.087812  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:52:38.115539  405191 cri.go:89] found id: ""
	I1206 10:52:38.115553  405191 logs.go:282] 0 containers: []
	W1206 10:52:38.115560  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:52:38.115566  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:52:38.115624  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:52:38.140812  405191 cri.go:89] found id: ""
	I1206 10:52:38.140826  405191 logs.go:282] 0 containers: []
	W1206 10:52:38.140833  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:52:38.140838  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:52:38.140896  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:52:38.166576  405191 cri.go:89] found id: ""
	I1206 10:52:38.166590  405191 logs.go:282] 0 containers: []
	W1206 10:52:38.166597  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:52:38.166602  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:52:38.166662  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:52:38.191851  405191 cri.go:89] found id: ""
	I1206 10:52:38.191864  405191 logs.go:282] 0 containers: []
	W1206 10:52:38.191871  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:52:38.191876  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:52:38.191933  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:52:38.217461  405191 cri.go:89] found id: ""
	I1206 10:52:38.217475  405191 logs.go:282] 0 containers: []
	W1206 10:52:38.217482  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:52:38.217490  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:52:38.217502  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:52:38.232449  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:52:38.232465  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:52:38.295220  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:52:38.286931   11213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:38.287615   11213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:38.289268   11213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:38.289707   11213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:38.291283   11213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:52:38.295242  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:52:38.295255  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:52:38.363789  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:52:38.363809  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:52:38.393298  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:52:38.393313  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:52:40.963508  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:52:40.975400  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:52:40.975471  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:52:41.012381  405191 cri.go:89] found id: ""
	I1206 10:52:41.012396  405191 logs.go:282] 0 containers: []
	W1206 10:52:41.012403  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:52:41.012409  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:52:41.012481  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:52:41.045820  405191 cri.go:89] found id: ""
	I1206 10:52:41.045833  405191 logs.go:282] 0 containers: []
	W1206 10:52:41.045840  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:52:41.045845  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:52:41.045905  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:52:41.072220  405191 cri.go:89] found id: ""
	I1206 10:52:41.072234  405191 logs.go:282] 0 containers: []
	W1206 10:52:41.072241  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:52:41.072246  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:52:41.072315  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:52:41.099263  405191 cri.go:89] found id: ""
	I1206 10:52:41.099289  405191 logs.go:282] 0 containers: []
	W1206 10:52:41.099297  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:52:41.099302  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:52:41.099400  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:52:41.125321  405191 cri.go:89] found id: ""
	I1206 10:52:41.125335  405191 logs.go:282] 0 containers: []
	W1206 10:52:41.125342  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:52:41.125347  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:52:41.125407  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:52:41.151976  405191 cri.go:89] found id: ""
	I1206 10:52:41.151991  405191 logs.go:282] 0 containers: []
	W1206 10:52:41.151998  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:52:41.152004  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:52:41.152071  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:52:41.182220  405191 cri.go:89] found id: ""
	I1206 10:52:41.182246  405191 logs.go:282] 0 containers: []
	W1206 10:52:41.182254  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:52:41.182262  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:52:41.182276  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:52:41.248526  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:52:41.239066   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:41.239904   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:41.241505   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:41.242002   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:41.243768   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:52:41.248580  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:52:41.248592  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:52:41.318224  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:52:41.318245  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:52:41.351350  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:52:41.351366  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:52:41.419147  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:52:41.419175  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:52:43.934479  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:52:43.945219  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:52:43.945319  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:52:43.977434  405191 cri.go:89] found id: ""
	I1206 10:52:43.977447  405191 logs.go:282] 0 containers: []
	W1206 10:52:43.977455  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:52:43.977460  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:52:43.977521  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:52:44.023455  405191 cri.go:89] found id: ""
	I1206 10:52:44.023469  405191 logs.go:282] 0 containers: []
	W1206 10:52:44.023476  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:52:44.023481  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:52:44.023547  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:52:44.054515  405191 cri.go:89] found id: ""
	I1206 10:52:44.054528  405191 logs.go:282] 0 containers: []
	W1206 10:52:44.054535  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:52:44.054542  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:52:44.054606  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:52:44.081078  405191 cri.go:89] found id: ""
	I1206 10:52:44.081092  405191 logs.go:282] 0 containers: []
	W1206 10:52:44.081100  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:52:44.081105  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:52:44.081169  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:52:44.107423  405191 cri.go:89] found id: ""
	I1206 10:52:44.107437  405191 logs.go:282] 0 containers: []
	W1206 10:52:44.107451  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:52:44.107456  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:52:44.107514  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:52:44.134813  405191 cri.go:89] found id: ""
	I1206 10:52:44.134827  405191 logs.go:282] 0 containers: []
	W1206 10:52:44.134834  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:52:44.134839  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:52:44.134901  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:52:44.160796  405191 cri.go:89] found id: ""
	I1206 10:52:44.160816  405191 logs.go:282] 0 containers: []
	W1206 10:52:44.160824  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:52:44.160831  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:52:44.160842  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:52:44.190778  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:52:44.190796  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:52:44.257562  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:52:44.257581  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:52:44.272647  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:52:44.272663  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:52:44.338023  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:52:44.329392   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:44.330156   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:44.331823   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:44.332332   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:44.333956   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:52:44.338033  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:52:44.338043  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:52:46.906964  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:52:46.917503  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:52:46.917559  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:52:46.949168  405191 cri.go:89] found id: ""
	I1206 10:52:46.949182  405191 logs.go:282] 0 containers: []
	W1206 10:52:46.949189  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:52:46.949194  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:52:46.949253  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:52:46.981111  405191 cri.go:89] found id: ""
	I1206 10:52:46.981124  405191 logs.go:282] 0 containers: []
	W1206 10:52:46.981131  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:52:46.981136  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:52:46.981196  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:52:47.022951  405191 cri.go:89] found id: ""
	I1206 10:52:47.022965  405191 logs.go:282] 0 containers: []
	W1206 10:52:47.022972  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:52:47.022977  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:52:47.023037  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:52:47.052856  405191 cri.go:89] found id: ""
	I1206 10:52:47.052870  405191 logs.go:282] 0 containers: []
	W1206 10:52:47.052886  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:52:47.052891  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:52:47.052967  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:52:47.083787  405191 cri.go:89] found id: ""
	I1206 10:52:47.083800  405191 logs.go:282] 0 containers: []
	W1206 10:52:47.083807  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:52:47.083813  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:52:47.083870  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:52:47.109033  405191 cri.go:89] found id: ""
	I1206 10:52:47.109046  405191 logs.go:282] 0 containers: []
	W1206 10:52:47.109054  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:52:47.109059  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:52:47.109115  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:52:47.139758  405191 cri.go:89] found id: ""
	I1206 10:52:47.139772  405191 logs.go:282] 0 containers: []
	W1206 10:52:47.139779  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:52:47.139788  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:52:47.139798  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:52:47.154866  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:52:47.154884  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:52:47.221813  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:52:47.213688   11530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:47.214230   11530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:47.215830   11530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:47.216327   11530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:47.217906   11530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:52:47.221824  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:52:47.221835  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:52:47.290233  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:52:47.290253  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:52:47.321014  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:52:47.321036  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:52:49.890726  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:52:49.902627  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:52:49.902688  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:52:49.929201  405191 cri.go:89] found id: ""
	I1206 10:52:49.929215  405191 logs.go:282] 0 containers: []
	W1206 10:52:49.929224  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:52:49.929230  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:52:49.929290  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:52:49.956185  405191 cri.go:89] found id: ""
	I1206 10:52:49.956198  405191 logs.go:282] 0 containers: []
	W1206 10:52:49.956205  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:52:49.956210  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:52:49.956269  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:52:49.993314  405191 cri.go:89] found id: ""
	I1206 10:52:49.993329  405191 logs.go:282] 0 containers: []
	W1206 10:52:49.993336  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:52:49.993343  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:52:49.993403  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:52:50.037379  405191 cri.go:89] found id: ""
	I1206 10:52:50.037395  405191 logs.go:282] 0 containers: []
	W1206 10:52:50.037403  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:52:50.037409  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:52:50.037472  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:52:50.067336  405191 cri.go:89] found id: ""
	I1206 10:52:50.067351  405191 logs.go:282] 0 containers: []
	W1206 10:52:50.067358  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:52:50.067363  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:52:50.067469  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:52:50.094997  405191 cri.go:89] found id: ""
	I1206 10:52:50.095010  405191 logs.go:282] 0 containers: []
	W1206 10:52:50.095018  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:52:50.095023  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:52:50.095087  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:52:50.122233  405191 cri.go:89] found id: ""
	I1206 10:52:50.122247  405191 logs.go:282] 0 containers: []
	W1206 10:52:50.122254  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:52:50.122262  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:52:50.122274  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:52:50.137790  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:52:50.137811  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:52:50.201020  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:52:50.192768   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:50.193599   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:50.195170   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:50.195719   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:50.197320   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:52:50.201031  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:52:50.201041  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:52:50.275122  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:52:50.275142  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:52:50.303756  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:52:50.303777  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
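The block above repeats on a roughly three-second cadence: minikube first checks for a running kube-apiserver process (pgrep), then asks the CRI runtime via crictl whether any control-plane container exists (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet), and, finding none, falls back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status diagnostics before retrying. A minimal shell sketch of one probe pass, reconstructed from the commands in the log (the loop structure and the empty-result handling are illustrative assumptions, not minikube's actual code):

    # Hypothetical reconstruction of one probe pass seen in the log above.
    # The individual commands are taken verbatim from the log; the loop is illustrative.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    for component in kube-apiserver etcd coredns kube-scheduler \
                     kube-proxy kube-controller-manager kindnet; do
        ids=$(sudo crictl ps -a --quiet --name="${component}")
        [ -z "${ids}" ] && echo "No container was found matching \"${component}\""
    done
    # With no control-plane containers found, gather the same diagnostics:
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u crio -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a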
	I1206 10:52:52.872285  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:52:52.882349  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:52:52.882406  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:52:52.911618  405191 cri.go:89] found id: ""
	I1206 10:52:52.911631  405191 logs.go:282] 0 containers: []
	W1206 10:52:52.911638  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:52:52.911644  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:52:52.911705  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:52:52.937062  405191 cri.go:89] found id: ""
	I1206 10:52:52.937077  405191 logs.go:282] 0 containers: []
	W1206 10:52:52.937084  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:52:52.937089  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:52:52.937149  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:52:52.963326  405191 cri.go:89] found id: ""
	I1206 10:52:52.963340  405191 logs.go:282] 0 containers: []
	W1206 10:52:52.963347  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:52:52.963352  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:52:52.963437  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:52:52.997061  405191 cri.go:89] found id: ""
	I1206 10:52:52.997074  405191 logs.go:282] 0 containers: []
	W1206 10:52:52.997081  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:52:52.997086  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:52:52.997149  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:52:53.035456  405191 cri.go:89] found id: ""
	I1206 10:52:53.035469  405191 logs.go:282] 0 containers: []
	W1206 10:52:53.035477  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:52:53.035483  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:52:53.035543  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:52:53.063687  405191 cri.go:89] found id: ""
	I1206 10:52:53.063700  405191 logs.go:282] 0 containers: []
	W1206 10:52:53.063707  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:52:53.063712  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:52:53.063770  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:52:53.089131  405191 cri.go:89] found id: ""
	I1206 10:52:53.089145  405191 logs.go:282] 0 containers: []
	W1206 10:52:53.089152  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:52:53.089161  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:52:53.089180  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:52:53.154130  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:52:53.145768   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:53.146202   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:53.147939   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:53.148440   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:53.150128   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:52:53.154142  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:52:53.154153  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:52:53.226211  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:52:53.226231  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:52:53.255876  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:52:53.255893  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:52:53.328864  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:52:53.328884  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:52:55.844855  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:52:55.855173  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:52:55.855232  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:52:55.882003  405191 cri.go:89] found id: ""
	I1206 10:52:55.882016  405191 logs.go:282] 0 containers: []
	W1206 10:52:55.882037  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:52:55.882043  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:52:55.882102  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:52:55.906679  405191 cri.go:89] found id: ""
	I1206 10:52:55.906693  405191 logs.go:282] 0 containers: []
	W1206 10:52:55.906700  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:52:55.906705  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:52:55.906763  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:52:55.932742  405191 cri.go:89] found id: ""
	I1206 10:52:55.932756  405191 logs.go:282] 0 containers: []
	W1206 10:52:55.932763  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:52:55.932769  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:52:55.932830  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:52:55.959084  405191 cri.go:89] found id: ""
	I1206 10:52:55.959097  405191 logs.go:282] 0 containers: []
	W1206 10:52:55.959104  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:52:55.959109  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:52:55.959167  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:52:56.001438  405191 cri.go:89] found id: ""
	I1206 10:52:56.001453  405191 logs.go:282] 0 containers: []
	W1206 10:52:56.001461  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:52:56.001467  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:52:56.001540  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:52:56.039276  405191 cri.go:89] found id: ""
	I1206 10:52:56.039291  405191 logs.go:282] 0 containers: []
	W1206 10:52:56.039298  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:52:56.039304  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:52:56.039368  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:52:56.074083  405191 cri.go:89] found id: ""
	I1206 10:52:56.074097  405191 logs.go:282] 0 containers: []
	W1206 10:52:56.074104  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:52:56.074112  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:52:56.074124  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:52:56.148294  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:52:56.148320  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:52:56.163720  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:52:56.163740  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:52:56.231608  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:52:56.222271   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:56.222910   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:56.224621   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:56.225337   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:56.227055   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:52:56.231633  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:52:56.231644  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:52:56.301348  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:52:56.301373  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:52:58.834132  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:52:58.844214  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:52:58.844271  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:52:58.871604  405191 cri.go:89] found id: ""
	I1206 10:52:58.871618  405191 logs.go:282] 0 containers: []
	W1206 10:52:58.871625  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:52:58.871630  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:52:58.871689  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:52:58.898243  405191 cri.go:89] found id: ""
	I1206 10:52:58.898257  405191 logs.go:282] 0 containers: []
	W1206 10:52:58.898264  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:52:58.898269  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:52:58.898325  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:52:58.921887  405191 cri.go:89] found id: ""
	I1206 10:52:58.921901  405191 logs.go:282] 0 containers: []
	W1206 10:52:58.921907  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:52:58.921913  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:52:58.921970  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:52:58.947546  405191 cri.go:89] found id: ""
	I1206 10:52:58.947563  405191 logs.go:282] 0 containers: []
	W1206 10:52:58.947570  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:52:58.947575  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:52:58.947645  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:52:58.976915  405191 cri.go:89] found id: ""
	I1206 10:52:58.976930  405191 logs.go:282] 0 containers: []
	W1206 10:52:58.976937  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:52:58.976942  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:52:58.977005  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:52:59.013936  405191 cri.go:89] found id: ""
	I1206 10:52:59.013949  405191 logs.go:282] 0 containers: []
	W1206 10:52:59.013956  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:52:59.013962  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:52:59.014020  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:52:59.044670  405191 cri.go:89] found id: ""
	I1206 10:52:59.044683  405191 logs.go:282] 0 containers: []
	W1206 10:52:59.044690  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:52:59.044698  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:52:59.044708  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:52:59.111552  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:52:59.111571  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:52:59.125917  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:52:59.125933  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:52:59.190341  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:52:59.182165   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:59.182776   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:59.184355   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:59.184805   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:59.186371   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:52:59.190351  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:52:59.190362  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:52:59.258936  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:52:59.258957  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:01.790777  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:01.802470  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:01.802534  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:01.828331  405191 cri.go:89] found id: ""
	I1206 10:53:01.828345  405191 logs.go:282] 0 containers: []
	W1206 10:53:01.828352  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:01.828357  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:01.828415  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:01.853132  405191 cri.go:89] found id: ""
	I1206 10:53:01.853145  405191 logs.go:282] 0 containers: []
	W1206 10:53:01.853153  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:01.853158  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:01.853218  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:01.879034  405191 cri.go:89] found id: ""
	I1206 10:53:01.879048  405191 logs.go:282] 0 containers: []
	W1206 10:53:01.879055  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:01.879060  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:01.879119  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:01.905079  405191 cri.go:89] found id: ""
	I1206 10:53:01.905094  405191 logs.go:282] 0 containers: []
	W1206 10:53:01.905101  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:01.905106  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:01.905168  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:01.931029  405191 cri.go:89] found id: ""
	I1206 10:53:01.931043  405191 logs.go:282] 0 containers: []
	W1206 10:53:01.931050  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:01.931055  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:01.931115  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:01.958324  405191 cri.go:89] found id: ""
	I1206 10:53:01.958338  405191 logs.go:282] 0 containers: []
	W1206 10:53:01.958345  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:01.958351  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:01.958406  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:01.999570  405191 cri.go:89] found id: ""
	I1206 10:53:01.999583  405191 logs.go:282] 0 containers: []
	W1206 10:53:01.999590  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:01.999598  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:01.999613  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:02.075754  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:02.075775  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:02.091145  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:02.091168  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:02.166018  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:02.151211   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:02.151882   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:02.153563   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:02.154149   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:02.161209   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:53:02.166029  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:02.166041  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:02.236832  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:02.236853  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:04.769770  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:04.780155  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:04.780230  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:04.805785  405191 cri.go:89] found id: ""
	I1206 10:53:04.805799  405191 logs.go:282] 0 containers: []
	W1206 10:53:04.805806  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:04.805811  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:04.805871  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:04.833423  405191 cri.go:89] found id: ""
	I1206 10:53:04.833445  405191 logs.go:282] 0 containers: []
	W1206 10:53:04.833452  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:04.833458  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:04.833523  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:04.859864  405191 cri.go:89] found id: ""
	I1206 10:53:04.859879  405191 logs.go:282] 0 containers: []
	W1206 10:53:04.859888  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:04.859895  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:04.859964  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:04.886417  405191 cri.go:89] found id: ""
	I1206 10:53:04.886431  405191 logs.go:282] 0 containers: []
	W1206 10:53:04.886437  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:04.886443  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:04.886503  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:04.912019  405191 cri.go:89] found id: ""
	I1206 10:53:04.912033  405191 logs.go:282] 0 containers: []
	W1206 10:53:04.912040  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:04.912044  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:04.912104  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:04.941901  405191 cri.go:89] found id: ""
	I1206 10:53:04.941915  405191 logs.go:282] 0 containers: []
	W1206 10:53:04.941922  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:04.941928  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:04.941990  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:04.967316  405191 cri.go:89] found id: ""
	I1206 10:53:04.967330  405191 logs.go:282] 0 containers: []
	W1206 10:53:04.967337  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:04.967344  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:04.967356  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:05.048268  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:05.048290  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:05.064282  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:05.064299  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:05.132111  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:05.123756   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:05.124563   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:05.126211   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:05.126545   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:05.128101   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:53:05.132131  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:05.132142  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:05.202438  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:05.202460  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:07.731737  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:07.742255  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:07.742344  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:07.767645  405191 cri.go:89] found id: ""
	I1206 10:53:07.767659  405191 logs.go:282] 0 containers: []
	W1206 10:53:07.767666  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:07.767671  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:07.767730  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:07.793951  405191 cri.go:89] found id: ""
	I1206 10:53:07.793975  405191 logs.go:282] 0 containers: []
	W1206 10:53:07.793983  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:07.793989  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:07.794055  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:07.819683  405191 cri.go:89] found id: ""
	I1206 10:53:07.819699  405191 logs.go:282] 0 containers: []
	W1206 10:53:07.819705  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:07.819711  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:07.819784  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:07.851523  405191 cri.go:89] found id: ""
	I1206 10:53:07.851537  405191 logs.go:282] 0 containers: []
	W1206 10:53:07.851543  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:07.851549  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:07.851627  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:07.878807  405191 cri.go:89] found id: ""
	I1206 10:53:07.878831  405191 logs.go:282] 0 containers: []
	W1206 10:53:07.878838  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:07.878844  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:07.878915  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:07.911047  405191 cri.go:89] found id: ""
	I1206 10:53:07.911060  405191 logs.go:282] 0 containers: []
	W1206 10:53:07.911078  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:07.911084  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:07.911155  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:07.937042  405191 cri.go:89] found id: ""
	I1206 10:53:07.937064  405191 logs.go:282] 0 containers: []
	W1206 10:53:07.937072  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:07.937080  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:07.937091  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:08.004528  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:08.004551  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:08.026930  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:08.026947  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:08.109064  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:08.100555   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:08.101020   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:08.102569   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:08.102918   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:08.104386   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:53:08.109086  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:08.109096  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:08.177486  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:08.177508  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:10.706543  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:10.717198  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:10.717262  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:10.743532  405191 cri.go:89] found id: ""
	I1206 10:53:10.743545  405191 logs.go:282] 0 containers: []
	W1206 10:53:10.743552  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:10.743557  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:10.743617  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:10.768882  405191 cri.go:89] found id: ""
	I1206 10:53:10.768897  405191 logs.go:282] 0 containers: []
	W1206 10:53:10.768903  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:10.768908  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:10.768966  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:10.798729  405191 cri.go:89] found id: ""
	I1206 10:53:10.798742  405191 logs.go:282] 0 containers: []
	W1206 10:53:10.798751  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:10.798756  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:10.798814  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:10.823956  405191 cri.go:89] found id: ""
	I1206 10:53:10.823971  405191 logs.go:282] 0 containers: []
	W1206 10:53:10.823978  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:10.823984  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:10.824054  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:10.849242  405191 cri.go:89] found id: ""
	I1206 10:53:10.849271  405191 logs.go:282] 0 containers: []
	W1206 10:53:10.849278  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:10.849283  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:10.849351  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:10.876058  405191 cri.go:89] found id: ""
	I1206 10:53:10.876071  405191 logs.go:282] 0 containers: []
	W1206 10:53:10.876078  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:10.876086  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:10.876145  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:10.901170  405191 cri.go:89] found id: ""
	I1206 10:53:10.901184  405191 logs.go:282] 0 containers: []
	W1206 10:53:10.901192  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:10.901199  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:10.901210  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:10.971362  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:10.971388  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:11.005981  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:11.006000  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:11.089894  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:11.089916  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:11.106328  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:11.106365  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:11.174633  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:11.166045   12392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:11.167001   12392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:11.168645   12392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:11.169014   12392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:11.170539   12392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
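Every "describe nodes" attempt in this window fails identically: kubectl cannot even fetch the server API group list because nothing is listening on localhost:8441, the API server address in /var/lib/minikube/kubeconfig, so each dial is refused before any request is made. Two generic checks that would confirm this from inside the node (illustrative diagnostics only, not commands the test harness runs):

    # Hypothetical checks; not part of the minikube log above.
    ss -tlnp | grep 8441 || echo "nothing listening on tcp/8441"
    curl -k --max-time 5 https://localhost:8441/healthz || true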
	I1206 10:53:13.674898  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:13.689619  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:13.689793  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:13.724853  405191 cri.go:89] found id: ""
	I1206 10:53:13.724867  405191 logs.go:282] 0 containers: []
	W1206 10:53:13.724874  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:13.724880  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:13.724939  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:13.751349  405191 cri.go:89] found id: ""
	I1206 10:53:13.751363  405191 logs.go:282] 0 containers: []
	W1206 10:53:13.751369  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:13.751402  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:13.751488  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:13.778380  405191 cri.go:89] found id: ""
	I1206 10:53:13.778395  405191 logs.go:282] 0 containers: []
	W1206 10:53:13.778402  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:13.778408  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:13.778474  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:13.806068  405191 cri.go:89] found id: ""
	I1206 10:53:13.806081  405191 logs.go:282] 0 containers: []
	W1206 10:53:13.806088  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:13.806093  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:13.806150  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:13.831347  405191 cri.go:89] found id: ""
	I1206 10:53:13.831360  405191 logs.go:282] 0 containers: []
	W1206 10:53:13.831367  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:13.831410  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:13.831494  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:13.856962  405191 cri.go:89] found id: ""
	I1206 10:53:13.856976  405191 logs.go:282] 0 containers: []
	W1206 10:53:13.856983  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:13.856994  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:13.857057  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:13.883227  405191 cri.go:89] found id: ""
	I1206 10:53:13.883241  405191 logs.go:282] 0 containers: []
	W1206 10:53:13.883248  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:13.883256  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:13.883268  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:13.912731  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:13.912749  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:13.981562  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:13.981581  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:13.997805  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:13.997822  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:14.076333  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:14.066553   12495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:14.067750   12495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:14.068525   12495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:14.070431   12495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:14.071166   12495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:53:14.076343  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:14.076355  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:16.646007  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:16.656726  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:16.656822  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:16.682515  405191 cri.go:89] found id: ""
	I1206 10:53:16.682529  405191 logs.go:282] 0 containers: []
	W1206 10:53:16.682535  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:16.682541  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:16.682609  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:16.708327  405191 cri.go:89] found id: ""
	I1206 10:53:16.708341  405191 logs.go:282] 0 containers: []
	W1206 10:53:16.708359  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:16.708365  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:16.708433  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:16.744002  405191 cri.go:89] found id: ""
	I1206 10:53:16.744023  405191 logs.go:282] 0 containers: []
	W1206 10:53:16.744032  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:16.744037  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:16.744099  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:16.771487  405191 cri.go:89] found id: ""
	I1206 10:53:16.771501  405191 logs.go:282] 0 containers: []
	W1206 10:53:16.771509  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:16.771514  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:16.771594  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:16.799494  405191 cri.go:89] found id: ""
	I1206 10:53:16.799507  405191 logs.go:282] 0 containers: []
	W1206 10:53:16.799514  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:16.799520  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:16.799595  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:16.825114  405191 cri.go:89] found id: ""
	I1206 10:53:16.825128  405191 logs.go:282] 0 containers: []
	W1206 10:53:16.825135  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:16.825141  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:16.825204  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:16.851277  405191 cri.go:89] found id: ""
	I1206 10:53:16.851304  405191 logs.go:282] 0 containers: []
	W1206 10:53:16.851312  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:16.851319  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:16.851329  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
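	Before each log-gathering pass, minikube first looks for a live apiserver process (pgrep -xnf kube-apiserver.*minikube.*) and then asks the CRI runtime for containers by component name; every query above returns an empty ID list (found id: ""), so no control-plane container exists yet. A hedged shell equivalent of that per-component probe, looped for brevity:

	        for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	          echo "== $c =="
	          sudo crictl ps -a --quiet --name="$c"   # empty output = no matching container
	        done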
	I1206 10:53:16.880918  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:16.880935  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:16.946617  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:16.946636  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
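	These passes pull the last 400 lines of the kubelet unit, the crio unit, and the kernel ring buffer over SSH. The same data can be read by hand; a sketch, where <profile> is a placeholder for this test's cluster name (not shown in this excerpt):

	        minikube -p <profile> ssh -- sudo journalctl -u kubelet -n 400
	        minikube -p <profile> ssh -- sudo journalctl -u crio -n 400
	        minikube -p <profile> ssh -- sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400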
	I1206 10:53:16.961739  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:16.961756  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:17.047880  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:17.038809   12598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:17.039588   12598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:17.041249   12598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:17.041748   12598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:17.043299   12598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
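	The "describe nodes" pass fails for the same reason as the direct kubectl probes: the node-local kubeconfig points at https://localhost:8441 and nothing answers there. The configured endpoint can be read straight out of that kubeconfig (binary and paths as logged; config view with a JSONPath output is stock kubectl):

	        sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
	          --kubeconfig=/var/lib/minikube/kubeconfig \
	          config view -o jsonpath='{.clusters[0].cluster.server}'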
	I1206 10:53:17.047890  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:17.047901  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:19.616855  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:19.627228  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:19.627288  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:19.654067  405191 cri.go:89] found id: ""
	I1206 10:53:19.654081  405191 logs.go:282] 0 containers: []
	W1206 10:53:19.654088  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:19.654093  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:19.654166  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:19.679488  405191 cri.go:89] found id: ""
	I1206 10:53:19.679502  405191 logs.go:282] 0 containers: []
	W1206 10:53:19.679509  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:19.679515  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:19.679573  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:19.706620  405191 cri.go:89] found id: ""
	I1206 10:53:19.706635  405191 logs.go:282] 0 containers: []
	W1206 10:53:19.706642  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:19.706647  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:19.706706  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:19.734381  405191 cri.go:89] found id: ""
	I1206 10:53:19.734395  405191 logs.go:282] 0 containers: []
	W1206 10:53:19.734406  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:19.734412  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:19.734476  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:19.761415  405191 cri.go:89] found id: ""
	I1206 10:53:19.761429  405191 logs.go:282] 0 containers: []
	W1206 10:53:19.761436  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:19.761441  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:19.761502  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:19.787176  405191 cri.go:89] found id: ""
	I1206 10:53:19.787190  405191 logs.go:282] 0 containers: []
	W1206 10:53:19.787203  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:19.787209  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:19.787270  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:19.813067  405191 cri.go:89] found id: ""
	I1206 10:53:19.813081  405191 logs.go:282] 0 containers: []
	W1206 10:53:19.813088  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:19.813096  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:19.813105  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:19.878821  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:19.878840  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:19.894664  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:19.894680  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:19.965061  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:19.956101   12690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:19.957218   12690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:19.958973   12690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:19.959413   12690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:19.960938   12690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:53:19.965101  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:19.965111  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:20.038434  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:20.038456  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
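	The "container status" pass is written to be runtime-agnostic: it resolves crictl via which and, if that listing fails, falls back to docker ps -a. The logged one-liner, written out (a faithful expansion of the command above, not new behavior):

	        CRICTL="$(which crictl || echo crictl)"    # absolute path if installed, bare name otherwise
	        sudo "$CRICTL" ps -a || sudo docker ps -a  # docker is only tried when the crictl listing fails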
	I1206 10:53:22.572942  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:22.583202  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:22.583273  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:22.608534  405191 cri.go:89] found id: ""
	I1206 10:53:22.608548  405191 logs.go:282] 0 containers: []
	W1206 10:53:22.608556  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:22.608561  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:22.608623  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:22.637655  405191 cri.go:89] found id: ""
	I1206 10:53:22.637673  405191 logs.go:282] 0 containers: []
	W1206 10:53:22.637680  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:22.637685  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:22.637748  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:22.666908  405191 cri.go:89] found id: ""
	I1206 10:53:22.666922  405191 logs.go:282] 0 containers: []
	W1206 10:53:22.666929  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:22.666935  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:22.666995  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:22.694611  405191 cri.go:89] found id: ""
	I1206 10:53:22.694625  405191 logs.go:282] 0 containers: []
	W1206 10:53:22.694633  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:22.694638  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:22.694705  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:22.720468  405191 cri.go:89] found id: ""
	I1206 10:53:22.720482  405191 logs.go:282] 0 containers: []
	W1206 10:53:22.720489  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:22.720494  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:22.720551  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:22.750061  405191 cri.go:89] found id: ""
	I1206 10:53:22.750075  405191 logs.go:282] 0 containers: []
	W1206 10:53:22.750082  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:22.750087  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:22.750148  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:22.778201  405191 cri.go:89] found id: ""
	I1206 10:53:22.778216  405191 logs.go:282] 0 containers: []
	W1206 10:53:22.778223  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:22.778230  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:22.778241  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:22.848689  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:22.848710  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:22.878893  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:22.878908  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:22.945043  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:22.945065  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:22.960966  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:22.960982  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:23.041735  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:23.033031   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:23.033838   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:23.035561   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:23.036147   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:23.037681   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:53:25.543429  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:25.553845  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:25.553906  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:25.580411  405191 cri.go:89] found id: ""
	I1206 10:53:25.580427  405191 logs.go:282] 0 containers: []
	W1206 10:53:25.580434  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:25.580439  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:25.580498  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:25.610347  405191 cri.go:89] found id: ""
	I1206 10:53:25.610361  405191 logs.go:282] 0 containers: []
	W1206 10:53:25.610368  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:25.610373  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:25.610430  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:25.637376  405191 cri.go:89] found id: ""
	I1206 10:53:25.637390  405191 logs.go:282] 0 containers: []
	W1206 10:53:25.637398  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:25.637403  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:25.637463  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:25.666544  405191 cri.go:89] found id: ""
	I1206 10:53:25.666558  405191 logs.go:282] 0 containers: []
	W1206 10:53:25.666572  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:25.666577  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:25.666636  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:25.692777  405191 cri.go:89] found id: ""
	I1206 10:53:25.692791  405191 logs.go:282] 0 containers: []
	W1206 10:53:25.692798  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:25.692803  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:25.692865  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:25.721819  405191 cri.go:89] found id: ""
	I1206 10:53:25.721833  405191 logs.go:282] 0 containers: []
	W1206 10:53:25.721841  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:25.721845  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:25.721901  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:25.749420  405191 cri.go:89] found id: ""
	I1206 10:53:25.749435  405191 logs.go:282] 0 containers: []
	W1206 10:53:25.749442  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:25.749450  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:25.749461  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:25.817956  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:25.817979  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:25.847454  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:25.847480  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:25.913445  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:25.913464  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:25.928310  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:25.928326  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:26.010257  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:25.998143   12912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:25.999802   12912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:26.001851   12912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:26.002260   12912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:26.005429   12912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:53:28.510540  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:28.521536  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:28.521597  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:28.549848  405191 cri.go:89] found id: ""
	I1206 10:53:28.549862  405191 logs.go:282] 0 containers: []
	W1206 10:53:28.549869  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:28.549880  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:28.549941  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:28.574916  405191 cri.go:89] found id: ""
	I1206 10:53:28.574929  405191 logs.go:282] 0 containers: []
	W1206 10:53:28.574937  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:28.574941  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:28.575001  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:28.603948  405191 cri.go:89] found id: ""
	I1206 10:53:28.603963  405191 logs.go:282] 0 containers: []
	W1206 10:53:28.603971  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:28.603976  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:28.604038  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:28.633100  405191 cri.go:89] found id: ""
	I1206 10:53:28.633114  405191 logs.go:282] 0 containers: []
	W1206 10:53:28.633121  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:28.633127  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:28.633186  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:28.658360  405191 cri.go:89] found id: ""
	I1206 10:53:28.658374  405191 logs.go:282] 0 containers: []
	W1206 10:53:28.658381  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:28.658386  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:28.658450  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:28.686919  405191 cri.go:89] found id: ""
	I1206 10:53:28.686933  405191 logs.go:282] 0 containers: []
	W1206 10:53:28.686949  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:28.686955  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:28.687012  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:28.713970  405191 cri.go:89] found id: ""
	I1206 10:53:28.713984  405191 logs.go:282] 0 containers: []
	W1206 10:53:28.713991  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:28.714001  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:28.714011  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:28.783354  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:28.783415  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:28.799765  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:28.799785  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:28.875190  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:28.865163   13003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:28.865941   13003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:28.867993   13003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:28.868552   13003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:28.870594   13003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:53:28.875200  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:28.875211  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:28.947238  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:28.947258  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:31.487136  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:31.497608  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:31.497670  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:31.524319  405191 cri.go:89] found id: ""
	I1206 10:53:31.524333  405191 logs.go:282] 0 containers: []
	W1206 10:53:31.524341  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:31.524347  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:31.524409  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:31.550830  405191 cri.go:89] found id: ""
	I1206 10:53:31.550845  405191 logs.go:282] 0 containers: []
	W1206 10:53:31.550852  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:31.550857  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:31.550925  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:31.577502  405191 cri.go:89] found id: ""
	I1206 10:53:31.577516  405191 logs.go:282] 0 containers: []
	W1206 10:53:31.577523  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:31.577528  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:31.577587  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:31.604074  405191 cri.go:89] found id: ""
	I1206 10:53:31.604088  405191 logs.go:282] 0 containers: []
	W1206 10:53:31.604095  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:31.604100  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:31.604157  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:31.630962  405191 cri.go:89] found id: ""
	I1206 10:53:31.630976  405191 logs.go:282] 0 containers: []
	W1206 10:53:31.630984  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:31.630989  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:31.631053  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:31.656604  405191 cri.go:89] found id: ""
	I1206 10:53:31.656619  405191 logs.go:282] 0 containers: []
	W1206 10:53:31.656626  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:31.656632  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:31.656695  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:31.682731  405191 cri.go:89] found id: ""
	I1206 10:53:31.682745  405191 logs.go:282] 0 containers: []
	W1206 10:53:31.682752  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:31.682760  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:31.682771  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:31.715043  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:31.715059  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:31.780742  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:31.780762  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:31.795393  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:31.795410  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:31.863799  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:31.855344   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:31.856015   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:31.857739   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:31.858189   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:31.859847   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:53:31.863809  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:31.863820  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:34.432706  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:34.442775  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:34.442837  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:34.468444  405191 cri.go:89] found id: ""
	I1206 10:53:34.468458  405191 logs.go:282] 0 containers: []
	W1206 10:53:34.468465  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:34.468471  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:34.468536  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:34.494328  405191 cri.go:89] found id: ""
	I1206 10:53:34.494343  405191 logs.go:282] 0 containers: []
	W1206 10:53:34.494350  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:34.494356  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:34.494418  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:34.527045  405191 cri.go:89] found id: ""
	I1206 10:53:34.527060  405191 logs.go:282] 0 containers: []
	W1206 10:53:34.527068  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:34.527076  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:34.527139  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:34.554315  405191 cri.go:89] found id: ""
	I1206 10:53:34.554328  405191 logs.go:282] 0 containers: []
	W1206 10:53:34.554335  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:34.554340  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:34.554408  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:34.579994  405191 cri.go:89] found id: ""
	I1206 10:53:34.580009  405191 logs.go:282] 0 containers: []
	W1206 10:53:34.580024  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:34.580030  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:34.580093  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:34.608896  405191 cri.go:89] found id: ""
	I1206 10:53:34.608910  405191 logs.go:282] 0 containers: []
	W1206 10:53:34.608917  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:34.608925  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:34.608983  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:34.638506  405191 cri.go:89] found id: ""
	I1206 10:53:34.638521  405191 logs.go:282] 0 containers: []
	W1206 10:53:34.638528  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:34.638536  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:34.638549  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:34.700281  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:34.691941   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:34.692724   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:34.693733   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:34.694312   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:34.696014   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:53:34.700290  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:34.700302  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:34.773019  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:34.773040  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:34.803610  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:34.803628  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:34.870473  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:34.870498  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
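	The timestamps show this probe-and-gather cycle repeating roughly every three seconds (10:53:14, :16, :19, :22, :25, :28, :31, :34, :37, :40) while minikube waits for an apiserver to come up. A hedged sketch of an equivalent wait loop (minikube's real loop lives in Go, in logs.go/cri.go; this is only shell shorthand):

	        # poll until a kube-apiserver process for this minikube node appears
	        until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	          sleep 3
	        done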
	I1206 10:53:37.386935  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:37.397529  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:37.397618  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:37.422529  405191 cri.go:89] found id: ""
	I1206 10:53:37.422543  405191 logs.go:282] 0 containers: []
	W1206 10:53:37.422550  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:37.422556  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:37.422613  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:37.447810  405191 cri.go:89] found id: ""
	I1206 10:53:37.447824  405191 logs.go:282] 0 containers: []
	W1206 10:53:37.447830  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:37.447836  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:37.447895  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:37.473775  405191 cri.go:89] found id: ""
	I1206 10:53:37.473794  405191 logs.go:282] 0 containers: []
	W1206 10:53:37.473801  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:37.473806  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:37.473862  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:37.499349  405191 cri.go:89] found id: ""
	I1206 10:53:37.499362  405191 logs.go:282] 0 containers: []
	W1206 10:53:37.499370  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:37.499400  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:37.499468  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:37.526194  405191 cri.go:89] found id: ""
	I1206 10:53:37.526208  405191 logs.go:282] 0 containers: []
	W1206 10:53:37.526216  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:37.526221  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:37.526286  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:37.552021  405191 cri.go:89] found id: ""
	I1206 10:53:37.552041  405191 logs.go:282] 0 containers: []
	W1206 10:53:37.552049  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:37.552054  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:37.552113  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:37.577455  405191 cri.go:89] found id: ""
	I1206 10:53:37.577469  405191 logs.go:282] 0 containers: []
	W1206 10:53:37.577476  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:37.577484  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:37.577495  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:37.605307  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:37.605324  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:37.674813  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:37.674836  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:37.689252  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:37.689268  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:37.751707  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:37.743090   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:37.743746   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:37.745542   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:37.746167   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:37.747937   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:53:37.751719  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:37.751730  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:40.320654  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:40.331310  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:40.331372  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:40.357691  405191 cri.go:89] found id: ""
	I1206 10:53:40.357706  405191 logs.go:282] 0 containers: []
	W1206 10:53:40.357721  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:40.357726  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:40.357789  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:40.383818  405191 cri.go:89] found id: ""
	I1206 10:53:40.383833  405191 logs.go:282] 0 containers: []
	W1206 10:53:40.383841  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:40.383847  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:40.383904  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:40.412121  405191 cri.go:89] found id: ""
	I1206 10:53:40.412134  405191 logs.go:282] 0 containers: []
	W1206 10:53:40.412141  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:40.412146  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:40.412204  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:40.438527  405191 cri.go:89] found id: ""
	I1206 10:53:40.438542  405191 logs.go:282] 0 containers: []
	W1206 10:53:40.438549  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:40.438554  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:40.438616  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:40.465329  405191 cri.go:89] found id: ""
	I1206 10:53:40.465344  405191 logs.go:282] 0 containers: []
	W1206 10:53:40.465351  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:40.465356  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:40.465420  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:40.491939  405191 cri.go:89] found id: ""
	I1206 10:53:40.491952  405191 logs.go:282] 0 containers: []
	W1206 10:53:40.491960  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:40.491965  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:40.492029  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:40.516801  405191 cri.go:89] found id: ""
	I1206 10:53:40.516821  405191 logs.go:282] 0 containers: []
	W1206 10:53:40.516828  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:40.516836  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:40.516848  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:40.593042  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:40.593062  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:40.608966  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:40.608986  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:40.675818  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:40.665869   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:40.667834   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:40.668210   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:40.669803   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:40.670394   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:40.665869   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:40.667834   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:40.668210   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:40.669803   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:40.670394   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:53:40.675828  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:40.675841  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:40.744680  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:40.744702  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
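The block above is one full pass of minikube's log-gathering loop: pgrep for a running kube-apiserver, a crictl lookup for each control-plane component (all returning empty), then kubelet, dmesg, describe-nodes, CRI-O, and container-status collection. The identical pass repeats on a roughly three-second cadence for the remainder of this log. A minimal sketch of the per-component check, run directly on the node (an assumption; the report drives each command over ssh_runner) and using only commands that appear verbatim above:

    #!/usr/bin/env bash
    # One pass of the component checks seen in the log above.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="${name}")
      # Empty output is what produces the 'No container was found
      # matching ...' warnings in the log.
      [ -z "${ids}" ] && echo "No container was found matching \"${name}\""
    done
    # Log sources gathered once every lookup comes back empty:
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u crio -n 400
    sudo crictl ps -a || sudo docker ps -a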
	I1206 10:53:43.275550  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:43.285722  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:43.285783  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:43.312235  405191 cri.go:89] found id: ""
	I1206 10:53:43.312249  405191 logs.go:282] 0 containers: []
	W1206 10:53:43.312262  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:43.312278  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:43.312337  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:43.338204  405191 cri.go:89] found id: ""
	I1206 10:53:43.338219  405191 logs.go:282] 0 containers: []
	W1206 10:53:43.338226  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:43.338249  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:43.338321  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:43.363434  405191 cri.go:89] found id: ""
	I1206 10:53:43.363455  405191 logs.go:282] 0 containers: []
	W1206 10:53:43.363463  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:43.363480  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:43.363562  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:43.390724  405191 cri.go:89] found id: ""
	I1206 10:53:43.390738  405191 logs.go:282] 0 containers: []
	W1206 10:53:43.390745  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:43.390750  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:43.390824  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:43.416427  405191 cri.go:89] found id: ""
	I1206 10:53:43.416442  405191 logs.go:282] 0 containers: []
	W1206 10:53:43.416449  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:43.416454  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:43.416511  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:43.446598  405191 cri.go:89] found id: ""
	I1206 10:53:43.446612  405191 logs.go:282] 0 containers: []
	W1206 10:53:43.446619  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:43.446625  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:43.446695  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:43.472759  405191 cri.go:89] found id: ""
	I1206 10:53:43.472773  405191 logs.go:282] 0 containers: []
	W1206 10:53:43.472779  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:43.472787  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:43.472797  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:43.538686  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:43.538706  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:43.553731  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:43.553746  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:43.618535  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:43.609715   13531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:43.610485   13531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:43.612195   13531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:43.612812   13531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:43.614536   13531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:43.609715   13531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:43.610485   13531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:43.612195   13531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:43.612812   13531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:43.614536   13531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:53:43.618556  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:43.618570  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:43.690132  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:43.690152  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:46.225047  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:46.236105  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:46.236179  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:46.269036  405191 cri.go:89] found id: ""
	I1206 10:53:46.269066  405191 logs.go:282] 0 containers: []
	W1206 10:53:46.269074  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:46.269079  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:46.269151  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:46.300616  405191 cri.go:89] found id: ""
	I1206 10:53:46.300631  405191 logs.go:282] 0 containers: []
	W1206 10:53:46.300639  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:46.300645  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:46.300707  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:46.330077  405191 cri.go:89] found id: ""
	I1206 10:53:46.330102  405191 logs.go:282] 0 containers: []
	W1206 10:53:46.330110  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:46.330115  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:46.330189  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:46.361893  405191 cri.go:89] found id: ""
	I1206 10:53:46.361908  405191 logs.go:282] 0 containers: []
	W1206 10:53:46.361915  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:46.361920  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:46.361991  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:46.387920  405191 cri.go:89] found id: ""
	I1206 10:53:46.387934  405191 logs.go:282] 0 containers: []
	W1206 10:53:46.387941  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:46.387947  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:46.388006  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:46.415440  405191 cri.go:89] found id: ""
	I1206 10:53:46.415463  405191 logs.go:282] 0 containers: []
	W1206 10:53:46.415470  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:46.415475  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:46.415534  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:46.442198  405191 cri.go:89] found id: ""
	I1206 10:53:46.442211  405191 logs.go:282] 0 containers: []
	W1206 10:53:46.442219  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:46.442226  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:46.442239  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:46.457274  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:46.457290  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:46.520346  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:46.512290   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:46.512824   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:46.514476   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:46.514946   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:46.516438   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:46.512290   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:46.512824   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:46.514476   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:46.514946   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:46.516438   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:53:46.520388  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:46.520399  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:46.595642  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:46.595673  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:46.626749  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:46.626769  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:49.193445  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:49.203743  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:49.203807  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:49.233557  405191 cri.go:89] found id: ""
	I1206 10:53:49.233571  405191 logs.go:282] 0 containers: []
	W1206 10:53:49.233578  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:49.233583  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:49.233643  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:49.265569  405191 cri.go:89] found id: ""
	I1206 10:53:49.265583  405191 logs.go:282] 0 containers: []
	W1206 10:53:49.265590  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:49.265595  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:49.265651  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:49.296146  405191 cri.go:89] found id: ""
	I1206 10:53:49.296159  405191 logs.go:282] 0 containers: []
	W1206 10:53:49.296166  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:49.296172  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:49.296232  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:49.321471  405191 cri.go:89] found id: ""
	I1206 10:53:49.321485  405191 logs.go:282] 0 containers: []
	W1206 10:53:49.321492  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:49.321498  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:49.321556  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:49.346537  405191 cri.go:89] found id: ""
	I1206 10:53:49.346551  405191 logs.go:282] 0 containers: []
	W1206 10:53:49.346571  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:49.346577  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:49.346693  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:49.372292  405191 cri.go:89] found id: ""
	I1206 10:53:49.372307  405191 logs.go:282] 0 containers: []
	W1206 10:53:49.372314  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:49.372320  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:49.372382  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:49.397395  405191 cri.go:89] found id: ""
	I1206 10:53:49.397408  405191 logs.go:282] 0 containers: []
	W1206 10:53:49.397415  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:49.397422  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:49.397432  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:49.464359  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:49.464378  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:49.479746  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:49.479762  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:49.542949  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:49.534167   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:49.534752   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:49.536598   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:49.537091   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:49.538580   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:49.534167   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:49.534752   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:49.536598   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:49.537091   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:49.538580   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:53:49.542959  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:49.542969  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:49.612749  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:49.612769  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:52.142276  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:52.152804  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:52.152867  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:52.179560  405191 cri.go:89] found id: ""
	I1206 10:53:52.179575  405191 logs.go:282] 0 containers: []
	W1206 10:53:52.179582  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:52.179587  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:52.179642  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:52.204827  405191 cri.go:89] found id: ""
	I1206 10:53:52.204842  405191 logs.go:282] 0 containers: []
	W1206 10:53:52.204849  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:52.204854  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:52.204917  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:52.250790  405191 cri.go:89] found id: ""
	I1206 10:53:52.250804  405191 logs.go:282] 0 containers: []
	W1206 10:53:52.250811  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:52.250816  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:52.250886  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:52.282140  405191 cri.go:89] found id: ""
	I1206 10:53:52.282153  405191 logs.go:282] 0 containers: []
	W1206 10:53:52.282161  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:52.282166  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:52.282225  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:52.314373  405191 cri.go:89] found id: ""
	I1206 10:53:52.314387  405191 logs.go:282] 0 containers: []
	W1206 10:53:52.314395  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:52.314400  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:52.314471  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:52.339037  405191 cri.go:89] found id: ""
	I1206 10:53:52.339051  405191 logs.go:282] 0 containers: []
	W1206 10:53:52.339058  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:52.339064  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:52.339124  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:52.366113  405191 cri.go:89] found id: ""
	I1206 10:53:52.366127  405191 logs.go:282] 0 containers: []
	W1206 10:53:52.366134  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:52.366142  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:52.366152  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:52.436368  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:52.436388  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:52.451468  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:52.451487  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:52.518739  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:52.509542   13849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:52.509966   13849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:52.511603   13849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:52.511955   13849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:52.513754   13849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:52.509542   13849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:52.509966   13849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:52.511603   13849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:52.511955   13849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:52.513754   13849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:53:52.518760  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:52.518777  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:52.593784  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:52.593805  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:55.124735  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:55.135510  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:55.135574  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:55.162613  405191 cri.go:89] found id: ""
	I1206 10:53:55.162626  405191 logs.go:282] 0 containers: []
	W1206 10:53:55.162633  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:55.162638  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:55.162703  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:55.189655  405191 cri.go:89] found id: ""
	I1206 10:53:55.189669  405191 logs.go:282] 0 containers: []
	W1206 10:53:55.189676  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:55.189682  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:55.189786  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:55.215289  405191 cri.go:89] found id: ""
	I1206 10:53:55.215303  405191 logs.go:282] 0 containers: []
	W1206 10:53:55.215310  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:55.215315  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:55.215402  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:55.247890  405191 cri.go:89] found id: ""
	I1206 10:53:55.247913  405191 logs.go:282] 0 containers: []
	W1206 10:53:55.247921  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:55.247926  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:55.247992  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:55.283368  405191 cri.go:89] found id: ""
	I1206 10:53:55.283409  405191 logs.go:282] 0 containers: []
	W1206 10:53:55.283416  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:55.283422  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:55.283516  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:55.310596  405191 cri.go:89] found id: ""
	I1206 10:53:55.310609  405191 logs.go:282] 0 containers: []
	W1206 10:53:55.310627  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:55.310632  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:55.310712  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:55.337361  405191 cri.go:89] found id: ""
	I1206 10:53:55.337374  405191 logs.go:282] 0 containers: []
	W1206 10:53:55.337381  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:55.337389  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:55.337399  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:55.404341  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:55.404361  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:55.419687  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:55.419705  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:55.485498  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:55.476614   13958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:55.477821   13958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:55.478840   13958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:55.479810   13958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:55.480435   13958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:55.476614   13958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:55.477821   13958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:55.478840   13958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:55.479810   13958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:55.480435   13958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:53:55.485509  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:55.485522  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:55.555911  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:55.555932  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:58.088179  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:58.099010  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:58.099069  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:58.124686  405191 cri.go:89] found id: ""
	I1206 10:53:58.124700  405191 logs.go:282] 0 containers: []
	W1206 10:53:58.124710  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:58.124716  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:58.124773  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:58.149717  405191 cri.go:89] found id: ""
	I1206 10:53:58.149730  405191 logs.go:282] 0 containers: []
	W1206 10:53:58.149738  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:58.149743  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:58.149800  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:58.177293  405191 cri.go:89] found id: ""
	I1206 10:53:58.177307  405191 logs.go:282] 0 containers: []
	W1206 10:53:58.177314  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:58.177319  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:58.177389  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:58.203540  405191 cri.go:89] found id: ""
	I1206 10:53:58.203554  405191 logs.go:282] 0 containers: []
	W1206 10:53:58.203562  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:58.203567  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:58.203632  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:58.237354  405191 cri.go:89] found id: ""
	I1206 10:53:58.237377  405191 logs.go:282] 0 containers: []
	W1206 10:53:58.237385  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:58.237390  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:58.237459  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:58.269725  405191 cri.go:89] found id: ""
	I1206 10:53:58.269739  405191 logs.go:282] 0 containers: []
	W1206 10:53:58.269746  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:58.269751  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:58.269821  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:58.297406  405191 cri.go:89] found id: ""
	I1206 10:53:58.297420  405191 logs.go:282] 0 containers: []
	W1206 10:53:58.297427  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:58.297435  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:58.297445  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:58.363296  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:58.363319  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:58.379154  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:58.379170  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:58.448306  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:58.438857   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:58.439654   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:58.441442   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:58.441790   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:58.443511   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:58.438857   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:58.439654   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:58.441442   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:58.441790   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:58.443511   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:53:58.448317  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:58.448331  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:58.518384  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:58.518408  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:01.052183  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:01.062404  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:01.062462  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:01.087509  405191 cri.go:89] found id: ""
	I1206 10:54:01.087523  405191 logs.go:282] 0 containers: []
	W1206 10:54:01.087530  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:01.087536  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:01.087598  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:01.113371  405191 cri.go:89] found id: ""
	I1206 10:54:01.113385  405191 logs.go:282] 0 containers: []
	W1206 10:54:01.113392  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:01.113397  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:01.113456  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:01.140194  405191 cri.go:89] found id: ""
	I1206 10:54:01.140208  405191 logs.go:282] 0 containers: []
	W1206 10:54:01.140214  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:01.140220  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:01.140282  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:01.166431  405191 cri.go:89] found id: ""
	I1206 10:54:01.166445  405191 logs.go:282] 0 containers: []
	W1206 10:54:01.166452  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:01.166460  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:01.166523  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:01.195742  405191 cri.go:89] found id: ""
	I1206 10:54:01.195756  405191 logs.go:282] 0 containers: []
	W1206 10:54:01.195764  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:01.195769  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:01.195835  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:01.228731  405191 cri.go:89] found id: ""
	I1206 10:54:01.228746  405191 logs.go:282] 0 containers: []
	W1206 10:54:01.228753  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:01.228759  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:01.228821  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:01.260175  405191 cri.go:89] found id: ""
	I1206 10:54:01.260189  405191 logs.go:282] 0 containers: []
	W1206 10:54:01.260196  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:01.260204  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:01.260214  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:01.337819  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:01.337839  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:01.353486  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:01.353502  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:01.423278  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:01.414904   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:01.415292   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:01.417033   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:01.417517   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:01.418780   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:01.414904   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:01.415292   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:01.417033   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:01.417517   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:01.418780   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:01.423288  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:01.423299  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:01.492536  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:01.492556  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:04.028526  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:04.039535  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:04.039600  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:04.069150  405191 cri.go:89] found id: ""
	I1206 10:54:04.069164  405191 logs.go:282] 0 containers: []
	W1206 10:54:04.069172  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:04.069177  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:04.069238  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:04.100343  405191 cri.go:89] found id: ""
	I1206 10:54:04.100357  405191 logs.go:282] 0 containers: []
	W1206 10:54:04.100364  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:04.100369  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:04.100431  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:04.127347  405191 cri.go:89] found id: ""
	I1206 10:54:04.127361  405191 logs.go:282] 0 containers: []
	W1206 10:54:04.127368  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:04.127395  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:04.127466  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:04.154542  405191 cri.go:89] found id: ""
	I1206 10:54:04.154557  405191 logs.go:282] 0 containers: []
	W1206 10:54:04.154564  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:04.154569  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:04.154628  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:04.181647  405191 cri.go:89] found id: ""
	I1206 10:54:04.181661  405191 logs.go:282] 0 containers: []
	W1206 10:54:04.181668  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:04.181676  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:04.181739  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:04.210872  405191 cri.go:89] found id: ""
	I1206 10:54:04.210886  405191 logs.go:282] 0 containers: []
	W1206 10:54:04.210893  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:04.210899  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:04.210962  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:04.246454  405191 cri.go:89] found id: ""
	I1206 10:54:04.246468  405191 logs.go:282] 0 containers: []
	W1206 10:54:04.246482  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:04.246490  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:04.246501  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:04.322848  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:04.322872  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:04.338928  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:04.338945  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:04.409905  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:04.400164   14270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:04.400961   14270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:04.402781   14270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:04.403461   14270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:04.404662   14270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:04.400164   14270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:04.400961   14270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:04.402781   14270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:04.403461   14270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:04.404662   14270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:04.409916  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:04.409928  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:04.480369  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:04.480389  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
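
Every iteration above and below runs the same probe. A minimal sketch of that probe's shape, in hypothetical Go (the function name, the 2-minute deadline, and local execution are illustrative assumptions; minikube's actual implementation runs these commands over SSH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// apiserverContainerIDs mirrors the logged command:
//   sudo crictl ps -a --quiet --name=kube-apiserver
// crictl prints one container ID per line, or nothing when no match exists.
func apiserverContainerIDs() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	deadline := time.Now().Add(2 * time.Minute) // assumed deadline, for illustration
	for time.Now().Before(deadline) {
		ids, err := apiserverContainerIDs()
		if err == nil && len(ids) > 0 {
			fmt.Println("kube-apiserver container found:", ids[0])
			return
		}
		time.Sleep(3 * time.Second) // matches the ~3s cadence between log iterations
	}
	fmt.Println("timed out: kube-apiserver never appeared, so :8441 stays unreachable")
}
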
	I1206 10:54:07.012345  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:07.022891  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:07.022962  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:07.049835  405191 cri.go:89] found id: ""
	I1206 10:54:07.049849  405191 logs.go:282] 0 containers: []
	W1206 10:54:07.049856  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:07.049861  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:07.049925  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:07.076617  405191 cri.go:89] found id: ""
	I1206 10:54:07.076631  405191 logs.go:282] 0 containers: []
	W1206 10:54:07.076637  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:07.076643  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:07.076704  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:07.103202  405191 cri.go:89] found id: ""
	I1206 10:54:07.103216  405191 logs.go:282] 0 containers: []
	W1206 10:54:07.103223  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:07.103229  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:07.103288  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:07.129964  405191 cri.go:89] found id: ""
	I1206 10:54:07.129977  405191 logs.go:282] 0 containers: []
	W1206 10:54:07.129984  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:07.129989  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:07.130048  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:07.157459  405191 cri.go:89] found id: ""
	I1206 10:54:07.157473  405191 logs.go:282] 0 containers: []
	W1206 10:54:07.157480  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:07.157485  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:07.157551  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:07.183797  405191 cri.go:89] found id: ""
	I1206 10:54:07.183811  405191 logs.go:282] 0 containers: []
	W1206 10:54:07.183818  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:07.183823  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:07.183881  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:07.209675  405191 cri.go:89] found id: ""
	I1206 10:54:07.209689  405191 logs.go:282] 0 containers: []
	W1206 10:54:07.209697  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:07.209704  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:07.209715  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:07.228202  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:07.228225  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:07.312770  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:07.304201   14373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:07.304672   14373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:07.306492   14373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:07.307083   14373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:07.308768   14373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:07.304201   14373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:07.304672   14373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:07.306492   14373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:07.307083   14373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:07.308768   14373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:07.312782  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:07.312792  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:07.383254  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:07.383275  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:07.414045  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:07.414060  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
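
Each failed probe is followed by the same log-gathering commands. A sketch of that collection step under the same assumptions (hypothetical Go; the command strings are copied verbatim from the "Gathering logs for ..." lines, run locally rather than over SSH):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Command strings copied from the log above.
	sources := []struct{ name, cmd string }{
		{"kubelet", `sudo journalctl -u kubelet -n 400`},
		{"dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`},
		{"CRI-O", `sudo journalctl -u crio -n 400`},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, s := range sources {
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("%s: %v\n", s.name, err)
			continue
		}
		fmt.Printf("--- %s: %d bytes collected ---\n", s.name, len(out))
	}
}
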
	[... seven further retry iterations (10:54:10 through 10:54:28, ~3s apart) condensed as near-verbatim repeats of the two shown above: each lists CRI containers for kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, and kindnet and finds none; each then gathers kubelet, dmesg, CRI-O, and container-status logs (order varies); and each "describe nodes" attempt fails with "connect: connection refused" to localhost:8441 ...]
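
Every "describe nodes" attempt dies on the same dial error. A quick way to reproduce just that failing step, as a hypothetical diagnostic (the address and timeout are assumptions taken from the kubectl errors above):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// kubectl's repeated errors all reduce to this dial failing with
	// "connect: connection refused" while nothing listens on port 8441.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
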
	I1206 10:54:30.811550  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:30.821711  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:30.821769  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:30.850956  405191 cri.go:89] found id: ""
	I1206 10:54:30.850970  405191 logs.go:282] 0 containers: []
	W1206 10:54:30.850979  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:30.850984  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:30.851045  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:30.876542  405191 cri.go:89] found id: ""
	I1206 10:54:30.876558  405191 logs.go:282] 0 containers: []
	W1206 10:54:30.876565  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:30.876571  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:30.876630  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:30.902552  405191 cri.go:89] found id: ""
	I1206 10:54:30.902566  405191 logs.go:282] 0 containers: []
	W1206 10:54:30.902573  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:30.902578  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:30.902635  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:30.928737  405191 cri.go:89] found id: ""
	I1206 10:54:30.928751  405191 logs.go:282] 0 containers: []
	W1206 10:54:30.928758  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:30.928764  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:30.928829  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:30.954309  405191 cri.go:89] found id: ""
	I1206 10:54:30.954323  405191 logs.go:282] 0 containers: []
	W1206 10:54:30.954330  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:30.954335  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:30.954394  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:30.980239  405191 cri.go:89] found id: ""
	I1206 10:54:30.980251  405191 logs.go:282] 0 containers: []
	W1206 10:54:30.980258  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:30.980263  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:30.980319  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:31.010962  405191 cri.go:89] found id: ""
	I1206 10:54:31.010977  405191 logs.go:282] 0 containers: []
	W1206 10:54:31.010985  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:31.010994  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:31.011006  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:31.078259  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:31.069995   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:31.070621   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:31.072176   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:31.072646   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:31.074155   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:31.069995   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:31.070621   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:31.072176   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:31.072646   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:31.074155   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:31.078270  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:31.078282  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:31.147428  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:31.147455  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:31.181028  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:31.181045  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:31.253555  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:31.253574  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:33.770610  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:33.781236  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:33.781299  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:33.806547  405191 cri.go:89] found id: ""
	I1206 10:54:33.806561  405191 logs.go:282] 0 containers: []
	W1206 10:54:33.806568  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:33.806574  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:33.806632  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:33.832359  405191 cri.go:89] found id: ""
	I1206 10:54:33.832371  405191 logs.go:282] 0 containers: []
	W1206 10:54:33.832379  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:33.832383  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:33.832442  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:33.857194  405191 cri.go:89] found id: ""
	I1206 10:54:33.857207  405191 logs.go:282] 0 containers: []
	W1206 10:54:33.857214  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:33.857219  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:33.857280  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:33.886113  405191 cri.go:89] found id: ""
	I1206 10:54:33.886126  405191 logs.go:282] 0 containers: []
	W1206 10:54:33.886133  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:33.886138  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:33.886194  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:33.914351  405191 cri.go:89] found id: ""
	I1206 10:54:33.914364  405191 logs.go:282] 0 containers: []
	W1206 10:54:33.914371  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:33.914376  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:33.914438  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:33.939584  405191 cri.go:89] found id: ""
	I1206 10:54:33.939598  405191 logs.go:282] 0 containers: []
	W1206 10:54:33.939605  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:33.939611  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:33.939683  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:33.965467  405191 cri.go:89] found id: ""
	I1206 10:54:33.965481  405191 logs.go:282] 0 containers: []
	W1206 10:54:33.965488  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:33.965496  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:33.965506  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:34.034434  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:34.034456  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:34.068244  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:34.068263  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:34.136528  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:34.136548  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:34.151695  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:34.151713  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:34.237655  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:34.227619   15334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:34.228750   15334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:34.231547   15334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:34.232096   15334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:34.233598   15334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:34.227619   15334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:34.228750   15334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:34.231547   15334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:34.232096   15334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:34.233598   15334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:36.737997  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:36.748632  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:36.748739  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:36.774541  405191 cri.go:89] found id: ""
	I1206 10:54:36.774554  405191 logs.go:282] 0 containers: []
	W1206 10:54:36.774563  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:36.774568  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:36.774628  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:36.804563  405191 cri.go:89] found id: ""
	I1206 10:54:36.804577  405191 logs.go:282] 0 containers: []
	W1206 10:54:36.804585  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:36.804590  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:36.804649  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:36.829295  405191 cri.go:89] found id: ""
	I1206 10:54:36.829309  405191 logs.go:282] 0 containers: []
	W1206 10:54:36.829316  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:36.829322  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:36.829384  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:36.854740  405191 cri.go:89] found id: ""
	I1206 10:54:36.854754  405191 logs.go:282] 0 containers: []
	W1206 10:54:36.854761  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:36.854767  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:36.854827  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:36.879535  405191 cri.go:89] found id: ""
	I1206 10:54:36.879548  405191 logs.go:282] 0 containers: []
	W1206 10:54:36.879555  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:36.879560  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:36.879621  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:36.908804  405191 cri.go:89] found id: ""
	I1206 10:54:36.908818  405191 logs.go:282] 0 containers: []
	W1206 10:54:36.908826  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:36.908831  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:36.908891  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:36.935290  405191 cri.go:89] found id: ""
	I1206 10:54:36.935312  405191 logs.go:282] 0 containers: []
	W1206 10:54:36.935320  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:36.935328  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:36.935338  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:37.005221  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:37.005253  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:37.023044  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:37.023070  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:37.090033  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:37.082290   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:37.082864   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:37.084384   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:37.084721   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:37.086198   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:37.082290   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:37.082864   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:37.084384   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:37.084721   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:37.086198   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:37.090044  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:37.090055  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:37.158891  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:37.158911  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:39.688451  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:39.698958  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:39.699020  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:39.725003  405191 cri.go:89] found id: ""
	I1206 10:54:39.725017  405191 logs.go:282] 0 containers: []
	W1206 10:54:39.725024  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:39.725029  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:39.725086  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:39.750186  405191 cri.go:89] found id: ""
	I1206 10:54:39.750208  405191 logs.go:282] 0 containers: []
	W1206 10:54:39.750215  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:39.750221  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:39.750286  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:39.777512  405191 cri.go:89] found id: ""
	I1206 10:54:39.777527  405191 logs.go:282] 0 containers: []
	W1206 10:54:39.777534  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:39.777539  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:39.777598  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:39.805960  405191 cri.go:89] found id: ""
	I1206 10:54:39.805974  405191 logs.go:282] 0 containers: []
	W1206 10:54:39.805981  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:39.805987  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:39.806048  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:39.832070  405191 cri.go:89] found id: ""
	I1206 10:54:39.832086  405191 logs.go:282] 0 containers: []
	W1206 10:54:39.832093  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:39.832099  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:39.832162  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:39.856950  405191 cri.go:89] found id: ""
	I1206 10:54:39.856964  405191 logs.go:282] 0 containers: []
	W1206 10:54:39.856970  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:39.856976  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:39.857034  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:39.882830  405191 cri.go:89] found id: ""
	I1206 10:54:39.882844  405191 logs.go:282] 0 containers: []
	W1206 10:54:39.882851  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:39.882859  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:39.882869  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:39.948996  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:39.949016  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:39.964250  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:39.964266  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:40.040200  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:40.026040   15535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:40.026898   15535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:40.028891   15535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:40.029963   15535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:40.030727   15535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:40.026040   15535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:40.026898   15535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:40.028891   15535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:40.029963   15535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:40.030727   15535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:40.040211  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:40.040222  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:40.112805  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:40.112828  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:42.645898  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:42.656339  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:42.656399  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:42.681441  405191 cri.go:89] found id: ""
	I1206 10:54:42.681456  405191 logs.go:282] 0 containers: []
	W1206 10:54:42.681462  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:42.681468  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:42.681529  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:42.706692  405191 cri.go:89] found id: ""
	I1206 10:54:42.706706  405191 logs.go:282] 0 containers: []
	W1206 10:54:42.706713  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:42.706718  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:42.706781  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:42.734049  405191 cri.go:89] found id: ""
	I1206 10:54:42.734063  405191 logs.go:282] 0 containers: []
	W1206 10:54:42.734070  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:42.734075  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:42.734136  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:42.759095  405191 cri.go:89] found id: ""
	I1206 10:54:42.759115  405191 logs.go:282] 0 containers: []
	W1206 10:54:42.759123  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:42.759128  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:42.759190  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:42.786861  405191 cri.go:89] found id: ""
	I1206 10:54:42.786875  405191 logs.go:282] 0 containers: []
	W1206 10:54:42.786882  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:42.786887  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:42.786949  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:42.817648  405191 cri.go:89] found id: ""
	I1206 10:54:42.817663  405191 logs.go:282] 0 containers: []
	W1206 10:54:42.817670  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:42.817675  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:42.817738  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:42.844223  405191 cri.go:89] found id: ""
	I1206 10:54:42.844245  405191 logs.go:282] 0 containers: []
	W1206 10:54:42.844253  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:42.844261  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:42.844278  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:42.914866  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:42.904424   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:42.904903   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:42.907237   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:42.908578   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:42.909360   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:42.904424   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:42.904903   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:42.907237   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:42.908578   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:42.909360   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:42.914877  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:42.914888  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:42.987160  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:42.987181  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:43.017513  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:43.017529  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:43.084573  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:43.084595  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:45.600685  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:45.611239  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:45.611299  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:45.635510  405191 cri.go:89] found id: ""
	I1206 10:54:45.635525  405191 logs.go:282] 0 containers: []
	W1206 10:54:45.635532  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:45.635538  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:45.635604  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:45.664995  405191 cri.go:89] found id: ""
	I1206 10:54:45.665008  405191 logs.go:282] 0 containers: []
	W1206 10:54:45.665015  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:45.665020  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:45.665077  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:45.691036  405191 cri.go:89] found id: ""
	I1206 10:54:45.691050  405191 logs.go:282] 0 containers: []
	W1206 10:54:45.691057  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:45.691062  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:45.691120  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:45.716374  405191 cri.go:89] found id: ""
	I1206 10:54:45.716388  405191 logs.go:282] 0 containers: []
	W1206 10:54:45.716395  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:45.716400  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:45.716461  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:45.742083  405191 cri.go:89] found id: ""
	I1206 10:54:45.742097  405191 logs.go:282] 0 containers: []
	W1206 10:54:45.742105  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:45.742110  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:45.742177  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:45.767269  405191 cri.go:89] found id: ""
	I1206 10:54:45.767282  405191 logs.go:282] 0 containers: []
	W1206 10:54:45.767290  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:45.767295  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:45.767352  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:45.793130  405191 cri.go:89] found id: ""
	I1206 10:54:45.793144  405191 logs.go:282] 0 containers: []
	W1206 10:54:45.793151  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:45.793158  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:45.793169  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:45.822623  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:45.822639  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:45.889014  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:45.889036  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:45.903697  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:45.903713  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:45.967833  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:45.959169   15755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:45.960025   15755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:45.961643   15755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:45.962228   15755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:45.963959   15755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:45.959169   15755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:45.960025   15755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:45.961643   15755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:45.962228   15755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:45.963959   15755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:45.967843  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:45.967854  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:48.539593  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:48.549488  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:48.549547  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:48.578962  405191 cri.go:89] found id: ""
	I1206 10:54:48.578976  405191 logs.go:282] 0 containers: []
	W1206 10:54:48.578983  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:48.578989  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:48.579060  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:48.604320  405191 cri.go:89] found id: ""
	I1206 10:54:48.604335  405191 logs.go:282] 0 containers: []
	W1206 10:54:48.604342  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:48.604347  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:48.604407  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:48.630562  405191 cri.go:89] found id: ""
	I1206 10:54:48.630575  405191 logs.go:282] 0 containers: []
	W1206 10:54:48.630583  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:48.630588  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:48.630645  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:48.659186  405191 cri.go:89] found id: ""
	I1206 10:54:48.659200  405191 logs.go:282] 0 containers: []
	W1206 10:54:48.659207  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:48.659218  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:48.659278  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:48.686349  405191 cri.go:89] found id: ""
	I1206 10:54:48.686363  405191 logs.go:282] 0 containers: []
	W1206 10:54:48.686371  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:48.686376  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:48.686433  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:48.712958  405191 cri.go:89] found id: ""
	I1206 10:54:48.712973  405191 logs.go:282] 0 containers: []
	W1206 10:54:48.712980  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:48.712985  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:48.713045  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:48.738763  405191 cri.go:89] found id: ""
	I1206 10:54:48.738777  405191 logs.go:282] 0 containers: []
	W1206 10:54:48.738783  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:48.738791  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:48.738801  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:48.753416  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:48.753431  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:48.818598  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:48.810121   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:48.810830   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:48.812598   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:48.813183   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:48.814760   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:48.810121   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:48.810830   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:48.812598   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:48.813183   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:48.814760   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:48.818609  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:48.818620  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:48.888023  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:48.888043  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:48.917094  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:48.917110  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:51.485627  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:51.497092  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:51.497157  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:51.529254  405191 cri.go:89] found id: ""
	I1206 10:54:51.529268  405191 logs.go:282] 0 containers: []
	W1206 10:54:51.529275  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:51.529281  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:51.529340  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:51.555292  405191 cri.go:89] found id: ""
	I1206 10:54:51.555305  405191 logs.go:282] 0 containers: []
	W1206 10:54:51.555312  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:51.555316  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:51.555390  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:51.580443  405191 cri.go:89] found id: ""
	I1206 10:54:51.580458  405191 logs.go:282] 0 containers: []
	W1206 10:54:51.580465  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:51.580470  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:51.580529  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:51.605907  405191 cri.go:89] found id: ""
	I1206 10:54:51.605921  405191 logs.go:282] 0 containers: []
	W1206 10:54:51.605928  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:51.605933  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:51.605991  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:51.630731  405191 cri.go:89] found id: ""
	I1206 10:54:51.630745  405191 logs.go:282] 0 containers: []
	W1206 10:54:51.630752  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:51.630757  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:51.630816  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:51.655906  405191 cri.go:89] found id: ""
	I1206 10:54:51.655919  405191 logs.go:282] 0 containers: []
	W1206 10:54:51.655926  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:51.655931  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:51.655987  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:51.681242  405191 cri.go:89] found id: ""
	I1206 10:54:51.681256  405191 logs.go:282] 0 containers: []
	W1206 10:54:51.681267  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:51.681275  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:51.681285  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:51.750829  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:51.750849  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:51.766064  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:51.766080  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:51.831905  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:51.823637   15956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:51.824299   15956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:51.825840   15956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:51.826394   15956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:51.827960   15956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:51.823637   15956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:51.824299   15956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:51.825840   15956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:51.826394   15956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:51.827960   15956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:51.831915  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:51.831925  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:51.901462  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:51.901484  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:54.431319  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:54.441623  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:54.441686  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:54.470441  405191 cri.go:89] found id: ""
	I1206 10:54:54.470456  405191 logs.go:282] 0 containers: []
	W1206 10:54:54.470463  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:54.470469  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:54.470527  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:54.505844  405191 cri.go:89] found id: ""
	I1206 10:54:54.505858  405191 logs.go:282] 0 containers: []
	W1206 10:54:54.505865  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:54.505870  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:54.505931  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:54.540765  405191 cri.go:89] found id: ""
	I1206 10:54:54.540779  405191 logs.go:282] 0 containers: []
	W1206 10:54:54.540786  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:54.540791  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:54.540859  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:54.568534  405191 cri.go:89] found id: ""
	I1206 10:54:54.568559  405191 logs.go:282] 0 containers: []
	W1206 10:54:54.568566  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:54.568571  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:54.568631  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:54.598488  405191 cri.go:89] found id: ""
	I1206 10:54:54.598501  405191 logs.go:282] 0 containers: []
	W1206 10:54:54.598508  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:54.598513  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:54.598573  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:54.625601  405191 cri.go:89] found id: ""
	I1206 10:54:54.625615  405191 logs.go:282] 0 containers: []
	W1206 10:54:54.625622  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:54.625627  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:54.625684  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:54.651039  405191 cri.go:89] found id: ""
	I1206 10:54:54.651053  405191 logs.go:282] 0 containers: []
	W1206 10:54:54.651069  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:54.651077  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:54.651088  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:54.721711  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	** stderr ** 
	E1206 10:54:54.712700   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:54.713574   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:54.715366   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:54.715761   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:54.717298   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:54.721724  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:54.721734  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:54.793778  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:54.793803  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:54.825565  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:54.825580  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:54.891107  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:54.891127  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:57.406177  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:57.416168  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:57.416231  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:57.444260  405191 cri.go:89] found id: ""
	I1206 10:54:57.444274  405191 logs.go:282] 0 containers: []
	W1206 10:54:57.444281  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:57.444286  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:57.444352  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:57.473921  405191 cri.go:89] found id: ""
	I1206 10:54:57.473935  405191 logs.go:282] 0 containers: []
	W1206 10:54:57.473942  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:57.473947  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:57.474006  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:57.507969  405191 cri.go:89] found id: ""
	I1206 10:54:57.507983  405191 logs.go:282] 0 containers: []
	W1206 10:54:57.507990  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:57.507995  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:57.508057  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:57.536405  405191 cri.go:89] found id: ""
	I1206 10:54:57.536420  405191 logs.go:282] 0 containers: []
	W1206 10:54:57.536428  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:57.536433  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:57.536502  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:57.564180  405191 cri.go:89] found id: ""
	I1206 10:54:57.564194  405191 logs.go:282] 0 containers: []
	W1206 10:54:57.564201  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:57.564206  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:57.564271  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:57.594665  405191 cri.go:89] found id: ""
	I1206 10:54:57.594679  405191 logs.go:282] 0 containers: []
	W1206 10:54:57.594687  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:57.594692  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:57.594751  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:57.627345  405191 cri.go:89] found id: ""
	I1206 10:54:57.627360  405191 logs.go:282] 0 containers: []
	W1206 10:54:57.627367  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:57.627398  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:57.627409  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:57.694026  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:57.694046  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:57.708621  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:57.708636  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:57.772743  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	** stderr ** 
	E1206 10:54:57.764569   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:57.765305   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:57.766828   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:57.767291   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:57.768789   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:57.772753  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:57.772764  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:57.841816  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:57.841836  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:55:00.375636  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:00.396560  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:55:00.396634  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:55:00.458455  405191 cri.go:89] found id: ""
	I1206 10:55:00.458471  405191 logs.go:282] 0 containers: []
	W1206 10:55:00.458479  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:55:00.458485  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:55:00.458553  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:55:00.497287  405191 cri.go:89] found id: ""
	I1206 10:55:00.497304  405191 logs.go:282] 0 containers: []
	W1206 10:55:00.497311  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:55:00.497317  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:55:00.497382  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:55:00.531076  405191 cri.go:89] found id: ""
	I1206 10:55:00.531092  405191 logs.go:282] 0 containers: []
	W1206 10:55:00.531099  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:55:00.531104  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:55:00.531172  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:55:00.567464  405191 cri.go:89] found id: ""
	I1206 10:55:00.567485  405191 logs.go:282] 0 containers: []
	W1206 10:55:00.567493  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:55:00.567499  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:55:00.567600  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:55:00.600497  405191 cri.go:89] found id: ""
	I1206 10:55:00.600512  405191 logs.go:282] 0 containers: []
	W1206 10:55:00.600520  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:55:00.600526  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:55:00.600596  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:55:00.648830  405191 cri.go:89] found id: ""
	I1206 10:55:00.648852  405191 logs.go:282] 0 containers: []
	W1206 10:55:00.648861  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:55:00.648868  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:55:00.648939  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:55:00.678773  405191 cri.go:89] found id: ""
	I1206 10:55:00.678789  405191 logs.go:282] 0 containers: []
	W1206 10:55:00.678797  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:55:00.678822  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:55:00.678834  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:55:00.748615  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:55:00.748637  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:55:00.764401  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:55:00.764420  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:55:00.836152  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	** stderr ** 
	E1206 10:55:00.827231   16271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:00.828085   16271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:00.830005   16271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:00.830399   16271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:00.832026   16271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:55:00.836163  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:55:00.836174  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:55:00.909732  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:55:00.909761  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:55:03.441095  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:03.451635  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:55:03.451701  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:55:03.486201  405191 cri.go:89] found id: ""
	I1206 10:55:03.486214  405191 logs.go:282] 0 containers: []
	W1206 10:55:03.486222  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:55:03.486226  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:55:03.486286  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:55:03.530153  405191 cri.go:89] found id: ""
	I1206 10:55:03.530167  405191 logs.go:282] 0 containers: []
	W1206 10:55:03.530174  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:55:03.530179  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:55:03.530243  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:55:03.559790  405191 cri.go:89] found id: ""
	I1206 10:55:03.559804  405191 logs.go:282] 0 containers: []
	W1206 10:55:03.559811  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:55:03.559816  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:55:03.559874  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:55:03.586392  405191 cri.go:89] found id: ""
	I1206 10:55:03.586406  405191 logs.go:282] 0 containers: []
	W1206 10:55:03.586413  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:55:03.586418  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:55:03.586477  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:55:03.612699  405191 cri.go:89] found id: ""
	I1206 10:55:03.612714  405191 logs.go:282] 0 containers: []
	W1206 10:55:03.612726  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:55:03.612732  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:55:03.612827  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:55:03.641895  405191 cri.go:89] found id: ""
	I1206 10:55:03.641909  405191 logs.go:282] 0 containers: []
	W1206 10:55:03.641916  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:55:03.641921  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:55:03.641978  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:55:03.668194  405191 cri.go:89] found id: ""
	I1206 10:55:03.668208  405191 logs.go:282] 0 containers: []
	W1206 10:55:03.668216  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:55:03.668224  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:55:03.668234  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:55:03.738567  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:55:03.738585  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:55:03.753715  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:55:03.753732  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:55:03.819356  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	** stderr ** 
	E1206 10:55:03.811487   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:03.812006   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:03.813500   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:03.813921   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:03.815528   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:55:03.819368  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:55:03.819393  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:55:03.888845  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:55:03.888866  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:55:06.421279  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:06.431630  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:55:06.431691  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:55:06.457432  405191 cri.go:89] found id: ""
	I1206 10:55:06.457446  405191 logs.go:282] 0 containers: []
	W1206 10:55:06.457453  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:55:06.457458  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:55:06.457525  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:55:06.498897  405191 cri.go:89] found id: ""
	I1206 10:55:06.498911  405191 logs.go:282] 0 containers: []
	W1206 10:55:06.498918  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:55:06.498923  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:55:06.498994  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:55:06.532288  405191 cri.go:89] found id: ""
	I1206 10:55:06.532320  405191 logs.go:282] 0 containers: []
	W1206 10:55:06.532328  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:55:06.532332  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:55:06.532403  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:55:06.558737  405191 cri.go:89] found id: ""
	I1206 10:55:06.558751  405191 logs.go:282] 0 containers: []
	W1206 10:55:06.558758  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:55:06.558764  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:55:06.558835  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:55:06.588791  405191 cri.go:89] found id: ""
	I1206 10:55:06.588805  405191 logs.go:282] 0 containers: []
	W1206 10:55:06.588813  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:55:06.588818  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:55:06.588887  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:55:06.615097  405191 cri.go:89] found id: ""
	I1206 10:55:06.615110  405191 logs.go:282] 0 containers: []
	W1206 10:55:06.615117  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:55:06.615122  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:55:06.615182  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:55:06.640273  405191 cri.go:89] found id: ""
	I1206 10:55:06.640297  405191 logs.go:282] 0 containers: []
	W1206 10:55:06.640305  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:55:06.640312  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:55:06.640323  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:55:06.709781  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:55:06.709800  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:55:06.724307  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:55:06.724323  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:55:06.788894  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	** stderr ** 
	E1206 10:55:06.780020   16482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:06.780621   16482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:06.782266   16482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:06.782823   16482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:06.784380   16482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:55:06.788903  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:55:06.788913  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:55:06.857942  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:55:06.857963  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:55:09.392819  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:09.402617  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:55:09.402675  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:55:09.429928  405191 cri.go:89] found id: ""
	I1206 10:55:09.429942  405191 logs.go:282] 0 containers: []
	W1206 10:55:09.429949  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:55:09.429955  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:55:09.430018  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:55:09.455893  405191 cri.go:89] found id: ""
	I1206 10:55:09.455907  405191 logs.go:282] 0 containers: []
	W1206 10:55:09.455913  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:55:09.455918  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:55:09.455975  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:55:09.492759  405191 cri.go:89] found id: ""
	I1206 10:55:09.492772  405191 logs.go:282] 0 containers: []
	W1206 10:55:09.492779  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:55:09.492784  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:55:09.492842  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:55:09.524405  405191 cri.go:89] found id: ""
	I1206 10:55:09.524418  405191 logs.go:282] 0 containers: []
	W1206 10:55:09.524425  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:55:09.524430  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:55:09.524488  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:55:09.555465  405191 cri.go:89] found id: ""
	I1206 10:55:09.555479  405191 logs.go:282] 0 containers: []
	W1206 10:55:09.555486  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:55:09.555491  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:55:09.555551  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:55:09.582561  405191 cri.go:89] found id: ""
	I1206 10:55:09.582575  405191 logs.go:282] 0 containers: []
	W1206 10:55:09.582582  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:55:09.582588  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:55:09.582646  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:55:09.608767  405191 cri.go:89] found id: ""
	I1206 10:55:09.608781  405191 logs.go:282] 0 containers: []
	W1206 10:55:09.608788  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:55:09.608796  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:55:09.608810  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:55:09.677518  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:55:09.677539  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:55:09.692935  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:55:09.692955  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:55:09.760066  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	** stderr ** 
	E1206 10:55:09.750973   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:09.751783   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:09.753612   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:09.754387   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:09.755955   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:55:09.760077  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:55:09.760087  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:55:09.829605  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:55:09.829626  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:55:12.359607  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:12.370647  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:55:12.370708  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:55:12.402338  405191 cri.go:89] found id: ""
	I1206 10:55:12.402353  405191 logs.go:282] 0 containers: []
	W1206 10:55:12.402361  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:55:12.402366  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:55:12.402435  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:55:12.428498  405191 cri.go:89] found id: ""
	I1206 10:55:12.428513  405191 logs.go:282] 0 containers: []
	W1206 10:55:12.428520  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:55:12.428525  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:55:12.428587  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:55:12.454311  405191 cri.go:89] found id: ""
	I1206 10:55:12.454325  405191 logs.go:282] 0 containers: []
	W1206 10:55:12.454333  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:55:12.454338  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:55:12.454399  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:55:12.493402  405191 cri.go:89] found id: ""
	I1206 10:55:12.493416  405191 logs.go:282] 0 containers: []
	W1206 10:55:12.493423  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:55:12.493429  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:55:12.493487  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:55:12.527015  405191 cri.go:89] found id: ""
	I1206 10:55:12.527029  405191 logs.go:282] 0 containers: []
	W1206 10:55:12.527036  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:55:12.527042  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:55:12.527103  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:55:12.556788  405191 cri.go:89] found id: ""
	I1206 10:55:12.556812  405191 logs.go:282] 0 containers: []
	W1206 10:55:12.556820  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:55:12.556825  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:55:12.556897  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:55:12.584336  405191 cri.go:89] found id: ""
	I1206 10:55:12.584350  405191 logs.go:282] 0 containers: []
	W1206 10:55:12.584357  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:55:12.584365  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:55:12.584376  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:55:12.614039  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:55:12.614055  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:55:12.680316  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:55:12.680338  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:55:12.696525  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:55:12.696542  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:55:12.760110  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	** stderr ** 
	E1206 10:55:12.751882   16704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:12.752591   16704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:12.754143   16704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:12.754484   16704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:12.756046   16704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:55:12.760120  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:55:12.760131  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:55:15.332168  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:15.342873  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:55:15.342950  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:55:15.371175  405191 cri.go:89] found id: ""
	I1206 10:55:15.371189  405191 logs.go:282] 0 containers: []
	W1206 10:55:15.371207  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:55:15.371212  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:55:15.371279  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:55:15.397085  405191 cri.go:89] found id: ""
	I1206 10:55:15.397100  405191 logs.go:282] 0 containers: []
	W1206 10:55:15.397107  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:55:15.397112  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:55:15.397171  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:55:15.422142  405191 cri.go:89] found id: ""
	I1206 10:55:15.422156  405191 logs.go:282] 0 containers: []
	W1206 10:55:15.422163  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:55:15.422174  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:55:15.422231  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:55:15.447127  405191 cri.go:89] found id: ""
	I1206 10:55:15.447141  405191 logs.go:282] 0 containers: []
	W1206 10:55:15.447148  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:55:15.447154  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:55:15.447212  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:55:15.477786  405191 cri.go:89] found id: ""
	I1206 10:55:15.477800  405191 logs.go:282] 0 containers: []
	W1206 10:55:15.477808  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:55:15.477813  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:55:15.477875  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:55:15.507270  405191 cri.go:89] found id: ""
	I1206 10:55:15.507285  405191 logs.go:282] 0 containers: []
	W1206 10:55:15.507292  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:55:15.507297  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:55:15.507360  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:55:15.536433  405191 cri.go:89] found id: ""
	I1206 10:55:15.536451  405191 logs.go:282] 0 containers: []
	W1206 10:55:15.536458  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:55:15.536470  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:55:15.536480  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:55:15.608040  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:55:15.608061  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:55:15.623617  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:55:15.623635  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:55:15.692548  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	** stderr ** 
	E1206 10:55:15.684603   16800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:15.685140   16800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:15.686901   16800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:15.687564   16800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:15.688573   16800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:55:15.692558  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:55:15.692581  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:55:15.760517  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:55:15.760537  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:55:18.289173  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:18.300544  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:55:18.300610  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:55:18.327678  405191 cri.go:89] found id: ""
	I1206 10:55:18.327692  405191 logs.go:282] 0 containers: []
	W1206 10:55:18.327699  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:55:18.327704  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:55:18.327764  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:55:18.353999  405191 cri.go:89] found id: ""
	I1206 10:55:18.354014  405191 logs.go:282] 0 containers: []
	W1206 10:55:18.354021  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:55:18.354026  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:55:18.354084  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:55:18.382276  405191 cri.go:89] found id: ""
	I1206 10:55:18.382291  405191 logs.go:282] 0 containers: []
	W1206 10:55:18.382298  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:55:18.382304  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:55:18.382365  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:55:18.410827  405191 cri.go:89] found id: ""
	I1206 10:55:18.410841  405191 logs.go:282] 0 containers: []
	W1206 10:55:18.410847  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:55:18.410852  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:55:18.410911  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:55:18.436138  405191 cri.go:89] found id: ""
	I1206 10:55:18.436160  405191 logs.go:282] 0 containers: []
	W1206 10:55:18.436167  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:55:18.436172  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:55:18.436233  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:55:18.462254  405191 cri.go:89] found id: ""
	I1206 10:55:18.462269  405191 logs.go:282] 0 containers: []
	W1206 10:55:18.462276  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:55:18.462283  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:55:18.462346  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:55:18.492347  405191 cri.go:89] found id: ""
	I1206 10:55:18.492362  405191 logs.go:282] 0 containers: []
	W1206 10:55:18.492369  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:55:18.492377  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:55:18.492388  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:55:18.509956  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:55:18.509973  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:55:18.581031  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	** stderr ** 
	E1206 10:55:18.572020   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:18.572812   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:18.573929   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:18.574912   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:18.575787   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:55:18.581041  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:55:18.581055  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:55:18.650942  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:55:18.650963  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:55:18.680668  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:55:18.680685  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
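The cycle above is minikube's apiserver health poll: a pgrep for a running kube-apiserver, a per-component crictl listing, then log gathering, repeated every few seconds until the restart budget runs out. A minimal by-hand sketch of the same checks, assuming a CRI-O node reached via 'minikube ssh' (the component list and command flags are taken from the log itself, nothing beyond it):

  sudo pgrep -xnf 'kube-apiserver.*minikube.*'
  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
           kube-controller-manager kindnet; do
    sudo crictl ps -a --quiet --name="$c"   # empty output: component never created
  done
  sudo journalctl -u kubelet -n 400 --no-pager
  sudo journalctl -u crio -n 400 --no-pager
  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

Every iteration in this stretch of the log returns empty container lists, which is why the poll keeps retrying until the restart deadline is reached at 10:55:33.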
	I1206 10:55:21.248379  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:21.258903  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:55:21.258982  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:55:21.286273  405191 cri.go:89] found id: ""
	I1206 10:55:21.286288  405191 logs.go:282] 0 containers: []
	W1206 10:55:21.286295  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:55:21.286300  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:55:21.286357  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:55:21.311824  405191 cri.go:89] found id: ""
	I1206 10:55:21.311841  405191 logs.go:282] 0 containers: []
	W1206 10:55:21.311851  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:55:21.311857  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:55:21.311923  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:55:21.338690  405191 cri.go:89] found id: ""
	I1206 10:55:21.338704  405191 logs.go:282] 0 containers: []
	W1206 10:55:21.338711  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:55:21.338716  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:55:21.338773  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:55:21.365841  405191 cri.go:89] found id: ""
	I1206 10:55:21.365855  405191 logs.go:282] 0 containers: []
	W1206 10:55:21.365862  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:55:21.365868  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:55:21.365926  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:55:21.396001  405191 cri.go:89] found id: ""
	I1206 10:55:21.396035  405191 logs.go:282] 0 containers: []
	W1206 10:55:21.396043  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:55:21.396049  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:55:21.396118  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:55:21.421823  405191 cri.go:89] found id: ""
	I1206 10:55:21.421837  405191 logs.go:282] 0 containers: []
	W1206 10:55:21.421856  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:55:21.421862  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:55:21.421934  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:55:21.449590  405191 cri.go:89] found id: ""
	I1206 10:55:21.449604  405191 logs.go:282] 0 containers: []
	W1206 10:55:21.449611  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:55:21.449619  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:55:21.449631  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:55:21.464618  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:55:21.464634  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:55:21.543901  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:55:21.526696   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:21.535561   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:21.536267   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:21.537985   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:21.538498   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:55:21.526696   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:21.535561   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:21.536267   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:21.537985   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:21.538498   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:55:21.543913  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:55:21.543926  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:55:21.614646  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:55:21.614669  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:55:21.645809  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:55:21.645825  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:55:24.214037  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:24.226008  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:55:24.226071  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:55:24.252473  405191 cri.go:89] found id: ""
	I1206 10:55:24.252487  405191 logs.go:282] 0 containers: []
	W1206 10:55:24.252495  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:55:24.252500  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:55:24.252560  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:55:24.280242  405191 cri.go:89] found id: ""
	I1206 10:55:24.280256  405191 logs.go:282] 0 containers: []
	W1206 10:55:24.280263  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:55:24.280268  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:55:24.280328  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:55:24.307083  405191 cri.go:89] found id: ""
	I1206 10:55:24.307098  405191 logs.go:282] 0 containers: []
	W1206 10:55:24.307105  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:55:24.307111  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:55:24.307181  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:55:24.333215  405191 cri.go:89] found id: ""
	I1206 10:55:24.333230  405191 logs.go:282] 0 containers: []
	W1206 10:55:24.333239  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:55:24.333245  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:55:24.333312  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:55:24.364248  405191 cri.go:89] found id: ""
	I1206 10:55:24.364262  405191 logs.go:282] 0 containers: []
	W1206 10:55:24.364269  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:55:24.364275  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:55:24.364340  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:55:24.392539  405191 cri.go:89] found id: ""
	I1206 10:55:24.392554  405191 logs.go:282] 0 containers: []
	W1206 10:55:24.392561  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:55:24.392567  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:55:24.392631  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:55:24.419045  405191 cri.go:89] found id: ""
	I1206 10:55:24.419059  405191 logs.go:282] 0 containers: []
	W1206 10:55:24.419066  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:55:24.419074  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:55:24.419084  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:55:24.485101  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:55:24.485123  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:55:24.506235  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:55:24.506258  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:55:24.586208  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:55:24.577740   17118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:24.578227   17118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:24.579907   17118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:24.580253   17118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:24.581928   17118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:55:24.577740   17118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:24.578227   17118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:24.579907   17118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:24.580253   17118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:24.581928   17118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:55:24.586218  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:55:24.586230  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:55:24.654219  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:55:24.654241  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:55:27.183198  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:27.194048  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:55:27.194116  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:55:27.223948  405191 cri.go:89] found id: ""
	I1206 10:55:27.223962  405191 logs.go:282] 0 containers: []
	W1206 10:55:27.223969  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:55:27.223974  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:55:27.224033  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:55:27.255792  405191 cri.go:89] found id: ""
	I1206 10:55:27.255807  405191 logs.go:282] 0 containers: []
	W1206 10:55:27.255814  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:55:27.255819  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:55:27.255882  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:55:27.285352  405191 cri.go:89] found id: ""
	I1206 10:55:27.285365  405191 logs.go:282] 0 containers: []
	W1206 10:55:27.285373  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:55:27.285380  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:55:27.285438  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:55:27.311572  405191 cri.go:89] found id: ""
	I1206 10:55:27.311599  405191 logs.go:282] 0 containers: []
	W1206 10:55:27.311606  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:55:27.311612  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:55:27.311684  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:55:27.337727  405191 cri.go:89] found id: ""
	I1206 10:55:27.337741  405191 logs.go:282] 0 containers: []
	W1206 10:55:27.337747  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:55:27.337753  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:55:27.337812  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:55:27.363513  405191 cri.go:89] found id: ""
	I1206 10:55:27.363527  405191 logs.go:282] 0 containers: []
	W1206 10:55:27.363534  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:55:27.363539  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:55:27.363611  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:55:27.390072  405191 cri.go:89] found id: ""
	I1206 10:55:27.390100  405191 logs.go:282] 0 containers: []
	W1206 10:55:27.390107  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:55:27.390115  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:55:27.390130  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:55:27.456548  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:55:27.456567  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:55:27.472626  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:55:27.472642  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:55:27.554055  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:55:27.545736   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:27.546254   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:27.548095   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:27.548446   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:27.550070   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:55:27.545736   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:27.546254   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:27.548095   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:27.548446   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:27.550070   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:55:27.554065  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:55:27.554076  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:55:27.622961  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:55:27.622984  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:55:30.156731  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:30.168052  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:55:30.168115  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:55:30.195190  405191 cri.go:89] found id: ""
	I1206 10:55:30.195205  405191 logs.go:282] 0 containers: []
	W1206 10:55:30.195237  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:55:30.195243  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:55:30.195315  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:55:30.222581  405191 cri.go:89] found id: ""
	I1206 10:55:30.222615  405191 logs.go:282] 0 containers: []
	W1206 10:55:30.222622  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:55:30.222628  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:55:30.222697  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:55:30.251144  405191 cri.go:89] found id: ""
	I1206 10:55:30.251162  405191 logs.go:282] 0 containers: []
	W1206 10:55:30.251173  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:55:30.251178  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:55:30.251280  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:55:30.282704  405191 cri.go:89] found id: ""
	I1206 10:55:30.282731  405191 logs.go:282] 0 containers: []
	W1206 10:55:30.282739  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:55:30.282744  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:55:30.282818  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:55:30.308787  405191 cri.go:89] found id: ""
	I1206 10:55:30.308802  405191 logs.go:282] 0 containers: []
	W1206 10:55:30.308809  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:55:30.308814  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:55:30.308881  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:55:30.334479  405191 cri.go:89] found id: ""
	I1206 10:55:30.334494  405191 logs.go:282] 0 containers: []
	W1206 10:55:30.334501  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:55:30.334507  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:55:30.334582  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:55:30.361350  405191 cri.go:89] found id: ""
	I1206 10:55:30.361365  405191 logs.go:282] 0 containers: []
	W1206 10:55:30.361372  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:55:30.361380  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:55:30.361390  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:55:30.438089  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:55:30.438120  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:55:30.453200  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:55:30.453217  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:55:30.539250  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:55:30.524592   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:30.527641   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:30.528089   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:30.529752   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:30.530427   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:55:30.524592   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:30.527641   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:30.528089   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:30.529752   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:30.530427   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:55:30.539272  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:55:30.539285  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:55:30.610101  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:55:30.610121  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:55:33.143484  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:33.153906  405191 kubeadm.go:602] duration metric: took 4m2.63956924s to restartPrimaryControlPlane
	W1206 10:55:33.153970  405191 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1206 10:55:33.154044  405191 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1206 10:55:33.564051  405191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:55:33.577264  405191 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 10:55:33.585285  405191 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 10:55:33.585343  405191 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:55:33.593207  405191 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:55:33.593217  405191 kubeadm.go:158] found existing configuration files:
	
	I1206 10:55:33.593284  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:55:33.601281  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:55:33.601338  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:55:33.609078  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:55:33.617336  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:55:33.617395  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:55:33.625100  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:55:33.633096  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:55:33.633153  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:55:33.640767  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:55:33.648692  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:55:33.648783  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
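The four grep/rm pairs above are minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint. A compact sketch of the same pattern, using only the endpoint and file list shown in the log:

  endpoint='https://control-plane.minikube.internal:8441'
  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
  done

In this run every grep exits with status 2 because kubeadm reset already removed the files, so the rm calls are no-ops and kubeadm init starts from a clean /etc/kubernetes.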
	I1206 10:55:33.656355  405191 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 10:55:33.695114  405191 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 10:55:33.695495  405191 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:55:33.776558  405191 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:55:33.776622  405191 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:55:33.776656  405191 kubeadm.go:319] OS: Linux
	I1206 10:55:33.776700  405191 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:55:33.776747  405191 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:55:33.776793  405191 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:55:33.776839  405191 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:55:33.776886  405191 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:55:33.776933  405191 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:55:33.776976  405191 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:55:33.777023  405191 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:55:33.777067  405191 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:55:33.839562  405191 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:55:33.839700  405191 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:55:33.839825  405191 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:55:33.847872  405191 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:55:33.851528  405191 out.go:252]   - Generating certificates and keys ...
	I1206 10:55:33.851642  405191 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:55:33.851732  405191 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:55:33.851823  405191 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 10:55:33.851888  405191 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 10:55:33.851963  405191 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 10:55:33.852020  405191 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 10:55:33.852092  405191 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 10:55:33.852157  405191 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 10:55:33.852236  405191 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 10:55:33.852314  405191 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 10:55:33.852354  405191 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 10:55:33.852412  405191 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:55:34.131310  405191 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:55:34.288855  405191 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:55:34.553487  405191 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:55:35.148231  405191 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:55:35.211116  405191 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:55:35.211864  405191 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:55:35.214714  405191 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:55:35.218231  405191 out.go:252]   - Booting up control plane ...
	I1206 10:55:35.218330  405191 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:55:35.218406  405191 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:55:35.218472  405191 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:55:35.235870  405191 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:55:35.235976  405191 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:55:35.244902  405191 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:55:35.245320  405191 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:55:35.245379  405191 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:55:35.375634  405191 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:55:35.375747  405191 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 10:59:35.374512  405191 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000270227s
	I1206 10:59:35.374544  405191 kubeadm.go:319] 
	I1206 10:59:35.374605  405191 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 10:59:35.374643  405191 kubeadm.go:319] 	- The kubelet is not running
	I1206 10:59:35.374758  405191 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 10:59:35.374763  405191 kubeadm.go:319] 
	I1206 10:59:35.374876  405191 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 10:59:35.374910  405191 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 10:59:35.374942  405191 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 10:59:35.374945  405191 kubeadm.go:319] 
	I1206 10:59:35.380563  405191 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 10:59:35.380998  405191 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 10:59:35.381115  405191 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 10:59:35.381348  405191 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 10:59:35.381353  405191 kubeadm.go:319] 
	I1206 10:59:35.381420  405191 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1206 10:59:35.381523  405191 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000270227s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
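kubeadm's own advice above ('systemctl status kubelet', 'journalctl -xeu kubelet') is the natural next step; a sketch of those checks plus the health endpoint kubeadm was polling, assuming a systemd node reached via 'minikube ssh':

  sudo systemctl status kubelet --no-pager
  sudo journalctl -xeu kubelet --no-pager | tail -n 100
  curl -sSL http://127.0.0.1:10248/healthz; echo    # the endpoint kubeadm waited 4m0s on

The kubelet journal for this window is not reproduced at this point in the log, so the root cause is not established here; minikube simply resets and retries below.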
	
	I1206 10:59:35.381613  405191 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1206 10:59:35.796714  405191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:59:35.809334  405191 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 10:59:35.809388  405191 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:59:35.817444  405191 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:59:35.817452  405191 kubeadm.go:158] found existing configuration files:
	
	I1206 10:59:35.817502  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:59:35.825442  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:59:35.825501  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:59:35.833082  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:59:35.842093  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:59:35.842159  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:59:35.851759  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:59:35.860099  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:59:35.860161  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:59:35.867900  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:59:35.876130  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:59:35.876188  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 10:59:35.884013  405191 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 10:59:35.926383  405191 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 10:59:35.926438  405191 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:59:36.016832  405191 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:59:36.016925  405191 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:59:36.016974  405191 kubeadm.go:319] OS: Linux
	I1206 10:59:36.017019  405191 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:59:36.017071  405191 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:59:36.017119  405191 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:59:36.017173  405191 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:59:36.017220  405191 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:59:36.017277  405191 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:59:36.017339  405191 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:59:36.017401  405191 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:59:36.017447  405191 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:59:36.080832  405191 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:59:36.080951  405191 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:59:36.081048  405191 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:59:36.091906  405191 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:59:36.097223  405191 out.go:252]   - Generating certificates and keys ...
	I1206 10:59:36.097345  405191 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:59:36.097426  405191 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:59:36.097511  405191 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 10:59:36.097596  405191 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 10:59:36.097675  405191 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 10:59:36.097750  405191 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 10:59:36.097815  405191 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 10:59:36.097876  405191 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 10:59:36.097954  405191 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 10:59:36.098026  405191 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 10:59:36.098063  405191 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 10:59:36.098122  405191 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:59:36.705762  405191 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:59:36.885173  405191 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:59:37.204953  405191 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:59:37.715956  405191 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:59:37.848965  405191 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:59:37.849735  405191 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:59:37.853600  405191 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:59:37.856590  405191 out.go:252]   - Booting up control plane ...
	I1206 10:59:37.856698  405191 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:59:37.856819  405191 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:59:37.858671  405191 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:59:37.873039  405191 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:59:37.873143  405191 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:59:37.880838  405191 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:59:37.881129  405191 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:59:37.881370  405191 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:59:38.015956  405191 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:59:38.016070  405191 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 11:03:38.011572  405191 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000448393s
	I1206 11:03:38.011605  405191 kubeadm.go:319] 
	I1206 11:03:38.011721  405191 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 11:03:38.011777  405191 kubeadm.go:319] 	- The kubelet is not running
	I1206 11:03:38.012051  405191 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 11:03:38.012060  405191 kubeadm.go:319] 
	I1206 11:03:38.012421  405191 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 11:03:38.012573  405191 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 11:03:38.012628  405191 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 11:03:38.012633  405191 kubeadm.go:319] 
	I1206 11:03:38.018189  405191 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 11:03:38.018608  405191 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 11:03:38.018716  405191 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 11:03:38.018960  405191 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 11:03:38.018965  405191 kubeadm.go:319] 
	I1206 11:03:38.019033  405191 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 11:03:38.019089  405191 kubeadm.go:403] duration metric: took 12m7.551905569s to StartCluster
	I1206 11:03:38.019121  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:03:38.019191  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:03:38.048894  405191 cri.go:89] found id: ""
	I1206 11:03:38.048909  405191 logs.go:282] 0 containers: []
	W1206 11:03:38.048917  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:03:38.048922  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:03:38.049009  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:03:38.077125  405191 cri.go:89] found id: ""
	I1206 11:03:38.077141  405191 logs.go:282] 0 containers: []
	W1206 11:03:38.077149  405191 logs.go:284] No container was found matching "etcd"
	I1206 11:03:38.077154  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:03:38.077229  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:03:38.104859  405191 cri.go:89] found id: ""
	I1206 11:03:38.104873  405191 logs.go:282] 0 containers: []
	W1206 11:03:38.104881  405191 logs.go:284] No container was found matching "coredns"
	I1206 11:03:38.104886  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:03:38.104946  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:03:38.131268  405191 cri.go:89] found id: ""
	I1206 11:03:38.131282  405191 logs.go:282] 0 containers: []
	W1206 11:03:38.131289  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:03:38.131295  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:03:38.131356  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:03:38.161469  405191 cri.go:89] found id: ""
	I1206 11:03:38.161483  405191 logs.go:282] 0 containers: []
	W1206 11:03:38.161490  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:03:38.161495  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:03:38.161555  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:03:38.191440  405191 cri.go:89] found id: ""
	I1206 11:03:38.191454  405191 logs.go:282] 0 containers: []
	W1206 11:03:38.191461  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:03:38.191467  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:03:38.191536  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:03:38.219921  405191 cri.go:89] found id: ""
	I1206 11:03:38.219935  405191 logs.go:282] 0 containers: []
	W1206 11:03:38.219943  405191 logs.go:284] No container was found matching "kindnet"
	I1206 11:03:38.219951  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:03:38.219962  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:03:38.285137  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:03:38.277076   21116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:38.277519   21116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:38.279007   21116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:38.279647   21116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:38.281164   21116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:03:38.277076   21116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:38.277519   21116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:38.279007   21116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:38.279647   21116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:38.281164   21116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:03:38.285157  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:03:38.285169  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:03:38.355235  405191 logs.go:123] Gathering logs for container status ...
	I1206 11:03:38.355259  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:03:38.391661  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 11:03:38.391679  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:03:38.462714  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 11:03:38.462733  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1206 11:03:38.480853  405191 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000448393s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1206 11:03:38.480894  405191 out.go:285] * 
	W1206 11:03:38.480951  405191 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000448393s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 11:03:38.480964  405191 out.go:285] * 
	W1206 11:03:38.483093  405191 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 11:03:38.488282  405191 out.go:203] 
	W1206 11:03:38.491978  405191 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000448393s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 11:03:38.492089  405191 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 11:03:38.492161  405191 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1206 11:03:38.495164  405191 out.go:203] 
	
	
	==> CRI-O <==
	Dec 06 10:51:28 functional-196950 crio[9931]: time="2025-12-06T10:51:28.716514999Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 06 10:51:28 functional-196950 crio[9931]: time="2025-12-06T10:51:28.716552218Z" level=info msg="Starting seccomp notifier watcher"
	Dec 06 10:51:28 functional-196950 crio[9931]: time="2025-12-06T10:51:28.716594951Z" level=info msg="Create NRI interface"
	Dec 06 10:51:28 functional-196950 crio[9931]: time="2025-12-06T10:51:28.716700199Z" level=info msg="built-in NRI default validator is disabled"
	Dec 06 10:51:28 functional-196950 crio[9931]: time="2025-12-06T10:51:28.716709069Z" level=info msg="runtime interface created"
	Dec 06 10:51:28 functional-196950 crio[9931]: time="2025-12-06T10:51:28.716719834Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 06 10:51:28 functional-196950 crio[9931]: time="2025-12-06T10:51:28.716725832Z" level=info msg="runtime interface starting up..."
	Dec 06 10:51:28 functional-196950 crio[9931]: time="2025-12-06T10:51:28.716731789Z" level=info msg="starting plugins..."
	Dec 06 10:51:28 functional-196950 crio[9931]: time="2025-12-06T10:51:28.716744294Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 06 10:51:28 functional-196950 crio[9931]: time="2025-12-06T10:51:28.71680827Z" level=info msg="No systemd watchdog enabled"
	Dec 06 10:51:28 functional-196950 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 06 10:55:33 functional-196950 crio[9931]: time="2025-12-06T10:55:33.843321747Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=5f168690-0479-4b67-8846-d623c54570c0 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:55:33 functional-196950 crio[9931]: time="2025-12-06T10:55:33.844283422Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=5ac9537d-0142-4bb1-b0d9-019b296bd707 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:55:33 functional-196950 crio[9931]: time="2025-12-06T10:55:33.844830562Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=29c4c588-f2de-4c84-8064-807353d6179d name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:55:33 functional-196950 crio[9931]: time="2025-12-06T10:55:33.845343116Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=1da192f7-cfcc-42b9-837a-9f285f929dcd name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:55:33 functional-196950 crio[9931]: time="2025-12-06T10:55:33.845798415Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=320d875b-b045-44db-aacf-14add6cc927b name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:55:33 functional-196950 crio[9931]: time="2025-12-06T10:55:33.846245664Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=dd8834be-d268-43d9-854b-d34b54047169 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:55:33 functional-196950 crio[9931]: time="2025-12-06T10:55:33.846769058Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=25869f1e-412e-46d2-b706-063f94749122 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.084499828Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=f0fd0946-b323-435f-946c-e412850eb9c0 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.085495997Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=34b6bb47-44a4-4780-9567-04c497973fa7 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.08608111Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=e26253f3-5094-4fbe-b6d1-306f2e31fa9a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.086661177Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=f8a4ffa3-a2d3-4c05-ba97-fd167ad1ff4e name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.087187984Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=43819321-fbd2-4155-9f0b-c716c27fc9ce name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.087957288Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=aeb2085e-c4e7-4d42-9049-5042f515cbdb name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.088481658Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=0f01a640-f6a6-41dd-afc3-f5cae208f89a name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:03:39.776174   21245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:39.776859   21245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:39.777960   21245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:39.778492   21245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:39.779984   21245 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 6 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014752] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503231] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.065820] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.901896] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.944603] kauditd_printk_skb: 39 callbacks suppressed
	[Dec 6 09:04] hrtimer: interrupt took 32230230 ns
	[Dec 6 10:25] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 6 10:26] overlayfs: idmapped layers are currently not supported
	[  +0.066821] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 6 10:32] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:33] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:51] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 11:03:39 up  2:46,  0 user,  load average: 0.21, 0.20, 0.50
	Linux functional-196950 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 11:03:37 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:03:37 functional-196950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1801.
	Dec 06 11:03:37 functional-196950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:03:37 functional-196950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:03:37 functional-196950 kubelet[21051]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:03:37 functional-196950 kubelet[21051]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:03:37 functional-196950 kubelet[21051]: E1206 11:03:37.773995   21051 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:03:37 functional-196950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:03:37 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:03:38 functional-196950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1802.
	Dec 06 11:03:38 functional-196950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:03:38 functional-196950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:03:38 functional-196950 kubelet[21139]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:03:38 functional-196950 kubelet[21139]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:03:38 functional-196950 kubelet[21139]: E1206 11:03:38.555245   21139 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:03:38 functional-196950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:03:38 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:03:39 functional-196950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1803.
	Dec 06 11:03:39 functional-196950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:03:39 functional-196950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:03:39 functional-196950 kubelet[21161]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:03:39 functional-196950 kubelet[21161]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:03:39 functional-196950 kubelet[21161]: E1206 11:03:39.289983   21161 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:03:39 functional-196950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:03:39 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-196950 -n functional-196950
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-196950 -n functional-196950: exit status 2 (387.187045ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-196950" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (735.25s)
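The kubelet journal above pins down the failure: kubelet v1.35.0-beta.0 exits with "kubelet is configured to not run on a host using cgroup v1" and systemd restarts it about once a second (restart counter 1801-1803), so the kubeadm probe of http://127.0.0.1:10248/healthz never succeeds. Per the [WARNING SystemVerification] text in the same output, cgroup v1 support for kubelet v1.35+ must be re-enabled explicitly. A minimal sketch of that override, assuming the lowercase JSON field corresponds to the 'FailCgroupV1' option named in the warning (illustrative; not part of this run):

	# KubeletConfiguration fragment re-enabling deprecated cgroup v1 support
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false

The kubeadm invocation in this run already passes SystemVerification in --ignore-preflight-errors, so only the kubelet-side config validation remains. minikube's own suggestion above (--extra-config=kubelet.cgroup-driver=systemd) targets a different misconfiguration and may not clear the cgroup v1 check.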

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.32s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-196950 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-196950 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (65.085303ms)

                                                
                                                
-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-196950 get po -l tier=control-plane -n kube-system -o=json": exit status 1
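The empty pod list plus "connection refused" on 192.168.49.2:8441 is consistent with the preceding ExtraConfig failure: the control plane never came up, so the tier=control-plane selector has nothing to match. A quick reachability probe (illustrative sketch, using the host and port from the error above):

	curl -sk https://192.168.49.2:8441/healthz || echo "apiserver unreachable on 8441"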
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-196950
helpers_test.go:243: (dbg) docker inspect functional-196950:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1",
	        "Created": "2025-12-06T10:36:45.201779678Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 393848,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T10:36:45.318229053Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/hostname",
	        "HostsPath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/hosts",
	        "LogPath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1-json.log",
	        "Name": "/functional-196950",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-196950:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-196950",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1",
	                "LowerDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1-init/diff:/var/lib/docker/overlay2/5011226d55616c9977b14c1fe617d1302fe59373df05ce8ec6e21b79143a1c57/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-196950",
	                "Source": "/var/lib/docker/volumes/functional-196950/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-196950",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-196950",
	                "name.minikube.sigs.k8s.io": "functional-196950",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9b8f961d55d7529aed7b841f2ac9f818c22ff12b8ad73f2d6bcee22656d9749a",
	            "SandboxKey": "/var/run/docker/netns/9b8f961d55d7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33158"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-196950": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:c1:40:2a:93:47",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a566bfdfd33a868cf61e5b18b36cbd55e9868f24cbb091e055ae606aeb8c6f03",
	                    "EndpointID": "452fe32bde0c42c4c35d700488ae93aeecc6c6a971ac6f1a8a492dbc4b328ed9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-196950",
	                        "d150aac7296d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
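The inspect output confirms the container itself is Running, with 8441/tcp published to 127.0.0.1:33161; only the apiserver behind that mapping is down. As a convenience sketch, the same mapping can be read back with docker's port subcommand:

	docker port functional-196950 8441
	# expected, per the NetworkSettings above: 127.0.0.1:33161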
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-196950 -n functional-196950
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-196950 -n functional-196950: exit status 2 (323.832786ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 logs -n 25
E1206 11:03:41.814565  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-196950 logs -n 25: (1.033233926s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-205266 image ls --format yaml --alsologtostderr                                                                                        │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ ssh     │ functional-205266 ssh pgrep buildkitd                                                                                                             │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │                     │
	│ image   │ functional-205266 image ls --format json --alsologtostderr                                                                                        │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image   │ functional-205266 image build -t localhost/my-image:functional-205266 testdata/build --alsologtostderr                                            │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image   │ functional-205266 image ls --format table --alsologtostderr                                                                                       │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ image   │ functional-205266 image ls                                                                                                                        │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ delete  │ -p functional-205266                                                                                                                              │ functional-205266 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │ 06 Dec 25 10:36 UTC │
	│ start   │ -p functional-196950 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:36 UTC │                     │
	│ start   │ -p functional-196950 --alsologtostderr -v=8                                                                                                       │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:45 UTC │                     │
	│ cache   │ functional-196950 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ cache   │ functional-196950 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ cache   │ functional-196950 cache add registry.k8s.io/pause:latest                                                                                          │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ cache   │ functional-196950 cache add minikube-local-cache-test:functional-196950                                                                           │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ cache   │ functional-196950 cache delete minikube-local-cache-test:functional-196950                                                                        │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ cache   │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ ssh     │ functional-196950 ssh sudo crictl images                                                                                                          │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ ssh     │ functional-196950 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ ssh     │ functional-196950 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │                     │
	│ cache   │ functional-196950 cache reload                                                                                                                    │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ ssh     │ functional-196950 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ kubectl │ functional-196950 kubectl -- --context functional-196950 get pods                                                                                 │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │                     │
	│ start   │ -p functional-196950 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                          │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:51:25
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:51:25.658528  405191 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:51:25.659862  405191 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:51:25.659873  405191 out.go:374] Setting ErrFile to fd 2...
	I1206 10:51:25.659879  405191 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:51:25.660272  405191 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 10:51:25.660784  405191 out.go:368] Setting JSON to false
	I1206 10:51:25.661671  405191 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9237,"bootTime":1765009049,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 10:51:25.661825  405191 start.go:143] virtualization:  
	I1206 10:51:25.665170  405191 out.go:179] * [functional-196950] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:51:25.668974  405191 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 10:51:25.669057  405191 notify.go:221] Checking for updates...
	I1206 10:51:25.674658  405191 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:51:25.677504  405191 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 10:51:25.680242  405191 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-362985/.minikube
	I1206 10:51:25.683061  405191 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:51:25.685807  405191 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:51:25.689056  405191 config.go:182] Loaded profile config "functional-196950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:51:25.689150  405191 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:51:25.719603  405191 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:51:25.719706  405191 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:51:25.776170  405191 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-06 10:51:25.766414658 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:51:25.776279  405191 docker.go:319] overlay module found
	I1206 10:51:25.779319  405191 out.go:179] * Using the docker driver based on existing profile
	I1206 10:51:25.782157  405191 start.go:309] selected driver: docker
	I1206 10:51:25.782168  405191 start.go:927] validating driver "docker" against &{Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:51:25.782268  405191 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:51:25.782379  405191 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:51:25.843232  405191 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-06 10:51:25.834027648 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
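
Minikube validates the existing docker driver by running docker system info with a JSON format string (twice above) and decoding the result. A rough shell equivalent, assuming jq is installed, to pull out the fields the driver check cares about:

    # Illustrative only: same data minikube parses from "docker system info"
    docker system info --format '{{json .}}' \
      | jq '{Driver, CgroupDriver, NCPU, MemTotal, ServerVersion}'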
	I1206 10:51:25.843742  405191 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 10:51:25.843762  405191 cni.go:84] Creating CNI manager for ""
	I1206 10:51:25.843817  405191 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:51:25.843868  405191 start.go:353] cluster config:
	{Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:51:25.846980  405191 out.go:179] * Starting "functional-196950" primary control-plane node in "functional-196950" cluster
	I1206 10:51:25.849840  405191 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 10:51:25.852721  405191 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:51:25.855512  405191 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 10:51:25.855549  405191 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1206 10:51:25.855557  405191 cache.go:65] Caching tarball of preloaded images
	I1206 10:51:25.855585  405191 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:51:25.855649  405191 preload.go:238] Found /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1206 10:51:25.855670  405191 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1206 10:51:25.855775  405191 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/config.json ...
	I1206 10:51:25.875281  405191 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 10:51:25.875292  405191 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 10:51:25.875312  405191 cache.go:243] Successfully downloaded all kic artifacts
	I1206 10:51:25.875342  405191 start.go:360] acquireMachinesLock for functional-196950: {Name:mkd2471f275d1d2a438cb4ce89f1d1521a0fb340 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:51:25.875462  405191 start.go:364] duration metric: took 100.145µs to acquireMachinesLock for "functional-196950"
	I1206 10:51:25.875483  405191 start.go:96] Skipping create...Using existing machine configuration
	I1206 10:51:25.875487  405191 fix.go:54] fixHost starting: 
	I1206 10:51:25.875763  405191 cli_runner.go:164] Run: docker container inspect functional-196950 --format={{.State.Status}}
	I1206 10:51:25.893454  405191 fix.go:112] recreateIfNeeded on functional-196950: state=Running err=<nil>
	W1206 10:51:25.893482  405191 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 10:51:25.896578  405191 out.go:252] * Updating the running docker "functional-196950" container ...
	I1206 10:51:25.896608  405191 machine.go:94] provisionDockerMachine start ...
	I1206 10:51:25.896697  405191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:51:25.913940  405191 main.go:143] libmachine: Using SSH client type: native
	I1206 10:51:25.914320  405191 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1206 10:51:25.914327  405191 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:51:26.075155  405191 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-196950
	
	I1206 10:51:26.075169  405191 ubuntu.go:182] provisioning hostname "functional-196950"
	I1206 10:51:26.075252  405191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:51:26.094744  405191 main.go:143] libmachine: Using SSH client type: native
	I1206 10:51:26.095070  405191 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1206 10:51:26.095080  405191 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-196950 && echo "functional-196950" | sudo tee /etc/hostname
	I1206 10:51:26.261114  405191 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-196950
	
	I1206 10:51:26.261197  405191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:51:26.279848  405191 main.go:143] libmachine: Using SSH client type: native
	I1206 10:51:26.280166  405191 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1206 10:51:26.280180  405191 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-196950' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-196950/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-196950' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:51:26.431933  405191 main.go:143] libmachine: SSH cmd err, output: <nil>: 
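
The hostname script above is idempotent: it touches /etc/hosts only when the 127.0.1.1 entry is missing, which is why re-provisioning this already-configured container produced empty output. A quick way to confirm the mapping on the node (a hypothetical check, not part of minikube's flow):

    grep -w functional-196950 /etc/hosts
    getent hosts functional-196950   # resolves via /etc/hosts, expect 127.0.1.1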
	I1206 10:51:26.431953  405191 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-362985/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-362985/.minikube}
	I1206 10:51:26.431971  405191 ubuntu.go:190] setting up certificates
	I1206 10:51:26.431995  405191 provision.go:84] configureAuth start
	I1206 10:51:26.432056  405191 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-196950
	I1206 10:51:26.450343  405191 provision.go:143] copyHostCerts
	I1206 10:51:26.450415  405191 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem, removing ...
	I1206 10:51:26.450432  405191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem
	I1206 10:51:26.450505  405191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem (1082 bytes)
	I1206 10:51:26.450607  405191 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem, removing ...
	I1206 10:51:26.450611  405191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem
	I1206 10:51:26.450636  405191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem (1123 bytes)
	I1206 10:51:26.450689  405191 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem, removing ...
	I1206 10:51:26.450693  405191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem
	I1206 10:51:26.450714  405191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem (1679 bytes)
	I1206 10:51:26.450755  405191 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem org=jenkins.functional-196950 san=[127.0.0.1 192.168.49.2 functional-196950 localhost minikube]
	I1206 10:51:26.540911  405191 provision.go:177] copyRemoteCerts
	I1206 10:51:26.540967  405191 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:51:26.541011  405191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:51:26.559000  405191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:51:26.664415  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 10:51:26.682850  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 10:51:26.700635  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 10:51:26.720260  405191 provision.go:87] duration metric: took 288.251554ms to configureAuth
	I1206 10:51:26.720277  405191 ubuntu.go:206] setting minikube options for container-runtime
	I1206 10:51:26.720482  405191 config.go:182] Loaded profile config "functional-196950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:51:26.720577  405191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:51:26.740294  405191 main.go:143] libmachine: Using SSH client type: native
	I1206 10:51:26.740607  405191 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1206 10:51:26.740618  405191 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 10:51:27.107160  405191 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 10:51:27.107175  405191 machine.go:97] duration metric: took 1.210560762s to provisionDockerMachine
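
Provisioning finishes by writing CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarting cri-o; presumably the kicbase image's crio unit references that file as an EnvironmentFile (not shown in this log). A sketch for verifying the handoff on the node, under that assumption:

    sudo cat /etc/sysconfig/crio.minikube   # expect the --insecure-registry flag
    sudo systemctl is-active crio           # expect: active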
	I1206 10:51:27.107185  405191 start.go:293] postStartSetup for "functional-196950" (driver="docker")
	I1206 10:51:27.107196  405191 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:51:27.107253  405191 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:51:27.107294  405191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:51:27.129039  405191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:51:27.236148  405191 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:51:27.240016  405191 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 10:51:27.240036  405191 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 10:51:27.240047  405191 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-362985/.minikube/addons for local assets ...
	I1206 10:51:27.240125  405191 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-362985/.minikube/files for local assets ...
	I1206 10:51:27.240216  405191 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem -> 3648552.pem in /etc/ssl/certs
	I1206 10:51:27.240311  405191 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/test/nested/copy/364855/hosts -> hosts in /etc/test/nested/copy/364855
	I1206 10:51:27.240389  405191 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/364855
	I1206 10:51:27.248525  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem --> /etc/ssl/certs/3648552.pem (1708 bytes)
	I1206 10:51:27.267246  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/test/nested/copy/364855/hosts --> /etc/test/nested/copy/364855/hosts (40 bytes)
	I1206 10:51:27.285080  405191 start.go:296] duration metric: took 177.880099ms for postStartSetup
	I1206 10:51:27.285152  405191 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:51:27.285189  405191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:51:27.302563  405191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:51:27.404400  405191 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 10:51:27.408968  405191 fix.go:56] duration metric: took 1.533473357s for fixHost
	I1206 10:51:27.408984  405191 start.go:83] releasing machines lock for "functional-196950", held for 1.533513702s
	I1206 10:51:27.409052  405191 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-196950
	I1206 10:51:27.427444  405191 ssh_runner.go:195] Run: cat /version.json
	I1206 10:51:27.427475  405191 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:51:27.427488  405191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:51:27.427532  405191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:51:27.449136  405191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:51:27.450292  405191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:51:27.555364  405191 ssh_runner.go:195] Run: systemctl --version
	I1206 10:51:27.645936  405191 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 10:51:27.683240  405191 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 10:51:27.687562  405191 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:51:27.687626  405191 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:51:27.695460  405191 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
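
Before recommending kindnet, minikube renames any stray bridge/podman CNI configs to *.mk_disabled so they cannot shadow the chosen CNI; here nothing matched. A non-destructive preview of what that find would rename:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( -name '*bridge*' -o -name '*podman*' \) ! -name '*.mk_disabled'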
	I1206 10:51:27.695474  405191 start.go:496] detecting cgroup driver to use...
	I1206 10:51:27.695505  405191 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 10:51:27.695551  405191 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 10:51:27.711018  405191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 10:51:27.724651  405191 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:51:27.724707  405191 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:51:27.740806  405191 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:51:27.754100  405191 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:51:27.883046  405191 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:51:28.013378  405191 docker.go:234] disabling docker service ...
	I1206 10:51:28.013440  405191 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:51:28.030310  405191 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:51:28.044424  405191 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:51:28.162200  405191 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:51:28.315775  405191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
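
cri-docker and docker are stopped and masked rather than merely disabled, so nothing can restart them underneath cri-o; the final is-active check above confirms docker stayed down. Expected state afterwards:

    systemctl is-enabled docker.service docker.socket   # expect: masked
    systemctl is-active docker.service                  # expect: inactive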
	I1206 10:51:28.333888  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:51:28.350625  405191 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 10:51:28.350700  405191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:51:28.360184  405191 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 10:51:28.360243  405191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:51:28.369224  405191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:51:28.378656  405191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:51:28.387862  405191 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:51:28.396244  405191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:51:28.405446  405191 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:51:28.414057  405191 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:51:28.423226  405191 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:51:28.430865  405191 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 10:51:28.438644  405191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:51:28.553737  405191 ssh_runner.go:195] Run: sudo systemctl restart crio
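
All of the sed edits above target /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) and only take effect at this restart. A spot-check of the resulting file:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf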
	I1206 10:51:28.722710  405191 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 10:51:28.722782  405191 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 10:51:28.727796  405191 start.go:564] Will wait 60s for crictl version
	I1206 10:51:28.727854  405191 ssh_runner.go:195] Run: which crictl
	I1206 10:51:28.731603  405191 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 10:51:28.757634  405191 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
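
These crictl calls need no --runtime-endpoint flag because /etc/crictl.yaml, written a few steps earlier, already pins the cri-o socket; the same applies to every later crictl invocation in this log:

    sudo cat /etc/crictl.yaml   # runtime-endpoint: unix:///var/run/crio/crio.sock
    sudo crictl info | head -5  # runtime status straight from cri-o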
	I1206 10:51:28.757708  405191 ssh_runner.go:195] Run: crio --version
	I1206 10:51:28.786864  405191 ssh_runner.go:195] Run: crio --version
	I1206 10:51:28.819624  405191 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1206 10:51:28.822438  405191 cli_runner.go:164] Run: docker network inspect functional-196950 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:51:28.838919  405191 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 10:51:28.845850  405191 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1206 10:51:28.848840  405191 kubeadm.go:884] updating cluster {Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:51:28.848980  405191 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 10:51:28.849059  405191 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:51:28.884770  405191 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:51:28.884782  405191 crio.go:433] Images already preloaded, skipping extraction
	I1206 10:51:28.884839  405191 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:51:28.911560  405191 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:51:28.911574  405191 cache_images.go:86] Images are preloaded, skipping loading
	I1206 10:51:28.911581  405191 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1206 10:51:28.911685  405191 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-196950 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
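
The kubelet flags above land as a systemd drop-in (see the scp of 10-kubeadm.conf just below); the empty ExecStart= line clears the base unit's command before the override replaces it. To inspect the merged unit on the node:

    systemctl cat kubelet   # base unit plus kubelet.service.d/10-kubeadm.conf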
	I1206 10:51:28.911771  405191 ssh_runner.go:195] Run: crio config
	I1206 10:51:28.966566  405191 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1206 10:51:28.966595  405191 cni.go:84] Creating CNI manager for ""
	I1206 10:51:28.966604  405191 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:51:28.966619  405191 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 10:51:28.966641  405191 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-196950 NodeName:functional-196950 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:51:28.966791  405191 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-196950"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
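
This rendered config is staged as /var/tmp/minikube/kubeadm.yaml.new (see the scp below) before it replaces the live file. Recent kubeadm releases can lint such a file up front; a sketch, assuming the bundled binary supports the subcommand:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new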
	
	I1206 10:51:28.966870  405191 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 10:51:28.978798  405191 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:51:28.978872  405191 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:51:28.987304  405191 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1206 10:51:29.001847  405191 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 10:51:29.017577  405191 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1206 10:51:29.031751  405191 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 10:51:29.036513  405191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:51:29.155805  405191 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:51:29.722153  405191 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950 for IP: 192.168.49.2
	I1206 10:51:29.722163  405191 certs.go:195] generating shared ca certs ...
	I1206 10:51:29.722178  405191 certs.go:227] acquiring lock for ca certs: {Name:mke2ec61a37b6f3abbcbeb9abd23d6a19d011dd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:51:29.722312  405191 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.key
	I1206 10:51:29.722350  405191 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.key
	I1206 10:51:29.722357  405191 certs.go:257] generating profile certs ...
	I1206 10:51:29.722458  405191 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.key
	I1206 10:51:29.722506  405191 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.key.a77b39a6
	I1206 10:51:29.722550  405191 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.key
	I1206 10:51:29.722659  405191 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855.pem (1338 bytes)
	W1206 10:51:29.722686  405191 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855_empty.pem, impossibly tiny 0 bytes
	I1206 10:51:29.722693  405191 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 10:51:29.722721  405191 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem (1082 bytes)
	I1206 10:51:29.722747  405191 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:51:29.722776  405191 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem (1679 bytes)
	I1206 10:51:29.722816  405191 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem (1708 bytes)
	I1206 10:51:29.723422  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:51:29.745118  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 10:51:29.764772  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:51:29.783979  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 10:51:29.803249  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 10:51:29.821820  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 10:51:29.840052  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:51:29.858172  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 10:51:29.876447  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:51:29.894619  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855.pem --> /usr/share/ca-certificates/364855.pem (1338 bytes)
	I1206 10:51:29.912710  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem --> /usr/share/ca-certificates/3648552.pem (1708 bytes)
	I1206 10:51:29.930993  405191 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 10:51:29.944776  405191 ssh_runner.go:195] Run: openssl version
	I1206 10:51:29.951232  405191 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:51:29.958913  405191 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:51:29.966922  405191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:51:29.970672  405191 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:26 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:51:29.970730  405191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:51:30.016305  405191 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 10:51:30.031889  405191 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364855.pem
	I1206 10:51:30.048455  405191 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364855.pem /etc/ssl/certs/364855.pem
	I1206 10:51:30.063564  405191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364855.pem
	I1206 10:51:30.076207  405191 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:36 /usr/share/ca-certificates/364855.pem
	I1206 10:51:30.076271  405191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364855.pem
	I1206 10:51:30.128156  405191 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 10:51:30.136853  405191 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648552.pem
	I1206 10:51:30.146061  405191 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648552.pem /etc/ssl/certs/3648552.pem
	I1206 10:51:30.154785  405191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648552.pem
	I1206 10:51:30.159209  405191 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:36 /usr/share/ca-certificates/3648552.pem
	I1206 10:51:30.159296  405191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648552.pem
	I1206 10:51:30.202450  405191 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 10:51:30.210421  405191 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:51:30.214689  405191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 10:51:30.257294  405191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 10:51:30.301161  405191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 10:51:30.342552  405191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 10:51:30.384443  405191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 10:51:30.426153  405191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
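
This block wires each CA into OpenSSL's hash-based lookup and then screens every control-plane cert for imminent expiry: -hash prints the subject hash behind the /etc/ssl/certs/<hash>.0 symlink names seen above (b5213941, 51391683, 3ec20f2e), and -checkend 86400 exits non-zero if the cert expires within 24 hours. For a single cert:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    sudo openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt && echo "valid >24h"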
	I1206 10:51:30.467193  405191 kubeadm.go:401] StartCluster: {Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:51:30.467269  405191 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:51:30.467336  405191 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:51:30.505294  405191 cri.go:89] found id: ""
	I1206 10:51:30.505356  405191 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:51:30.514317  405191 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 10:51:30.514327  405191 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 10:51:30.514378  405191 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 10:51:30.522953  405191 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:51:30.523619  405191 kubeconfig.go:125] found "functional-196950" server: "https://192.168.49.2:8441"
	I1206 10:51:30.525284  405191 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 10:51:30.535655  405191 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-06 10:36:53.608460602 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-06 10:51:29.025529796 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
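
Drift detection here leans on diff's exit-status contract: 0 means the files are identical, 1 means they differ, anything else is an error. A sketch under those assumptions (the function name is illustrative):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    // kubeadmConfigDrifted runs `diff -u old new`; exit status 1 signals
    // drift, and the captured output is the unified diff shown in the log.
    func kubeadmConfigDrifted(oldPath, newPath string) (bool, string, error) {
        out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
        if err == nil {
            return false, "", nil // files identical
        }
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 1 {
            return true, string(out), nil
        }
        return false, "", err // diff itself failed
    }

    func main() {
        drifted, diff, err := kubeadmConfigDrifted(
            "/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        fmt.Println(drifted, err)
        fmt.Print(diff)
    }
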
	I1206 10:51:30.535667  405191 kubeadm.go:1161] stopping kube-system containers ...
	I1206 10:51:30.535679  405191 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1206 10:51:30.535750  405191 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:51:30.563289  405191 cri.go:89] found id: ""
	I1206 10:51:30.563367  405191 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1206 10:51:30.577669  405191 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:51:30.585599  405191 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec  6 10:40 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  6 10:40 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  6 10:40 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec  6 10:40 /etc/kubernetes/scheduler.conf
	
	I1206 10:51:30.585661  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:51:30.593607  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:51:30.601561  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:51:30.601615  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:51:30.609082  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:51:30.616706  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:51:30.616764  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:51:30.624576  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:51:30.632333  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:51:30.632396  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
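
The grep/rm pairs above enforce a simple invariant: any kubeconfig that no longer references the expected control-plane endpoint is deleted so a later kubeadm phase regenerates it (grep exits 1 when the pattern is absent). A sketch of that rule:

    package main

    import (
        "errors"
        "os/exec"
    )

    // removeIfEndpointMissing keeps path when it already references endpoint
    // and deletes it when grep exits 1 (pattern not found), as in the log.
    func removeIfEndpointMissing(path, endpoint string) error {
        err := exec.Command("sudo", "grep", endpoint, path).Run()
        if err == nil {
            return nil // endpoint present, keep the file
        }
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 1 {
            return exec.Command("sudo", "rm", "-f", path).Run()
        }
        return err // grep failed for some other reason
    }

    func main() {
        _ = removeIfEndpointMissing("/etc/kubernetes/kubelet.conf",
            "https://control-plane.minikube.internal:8441")
    }
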
	I1206 10:51:30.640022  405191 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 10:51:30.648015  405191 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:51:30.694279  405191 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:51:31.789747  405191 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.095443049s)
	I1206 10:51:31.789807  405191 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:51:31.992373  405191 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:51:32.066243  405191 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
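
Rather than a full kubeadm init, the restart replays individual init phases against the refreshed config: certs, then kubeconfig, kubelet-start, control-plane, and finally etcd. A sketch of that sequence, with the env PATH prefix from the log elided:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // rerunControlPlanePhases replays the phase sequence from the log, in
    // order, stopping at the first failure.
    func rerunControlPlanePhases() error {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{"kubeadm", "init", "phase"}, p...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("phase %v: %w\n%s", p, err, out)
            }
        }
        return nil
    }

    func main() {
        if err := rerunControlPlanePhases(); err != nil {
            fmt.Println(err)
        }
    }
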
	I1206 10:51:32.115098  405191 api_server.go:52] waiting for apiserver process to appear ...
	I1206 10:51:32.115193  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... identical pgrep probe repeated every ~500ms from 10:51:32 through 10:52:31; no kube-apiserver process was ever found ...]
	I1206 10:52:31.616201  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
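
The run above is a plain poll loop: probe for the apiserver process twice a second until it appears or a deadline passes (here it never appears, so the wait expires after roughly a minute and the run falls through to diagnostics). A sketch of the loop:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer probes pgrep every 500ms, as above, until the
    // kube-apiserver process appears or timeout elapses.
    func waitForAPIServer(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if exec.Command("sudo", "pgrep", "-xnf",
                "kube-apiserver.*minikube.*").Run() == nil {
                return nil // process found
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
        fmt.Println(waitForAPIServer(60 * time.Second))
    }
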
	I1206 10:52:32.116069  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:52:32.116145  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:52:32.141390  405191 cri.go:89] found id: ""
	I1206 10:52:32.141404  405191 logs.go:282] 0 containers: []
	W1206 10:52:32.141411  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:52:32.141416  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:52:32.141473  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:52:32.166484  405191 cri.go:89] found id: ""
	I1206 10:52:32.166497  405191 logs.go:282] 0 containers: []
	W1206 10:52:32.166504  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:52:32.166509  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:52:32.166565  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:52:32.194996  405191 cri.go:89] found id: ""
	I1206 10:52:32.195009  405191 logs.go:282] 0 containers: []
	W1206 10:52:32.195016  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:52:32.195021  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:52:32.195076  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:52:32.221300  405191 cri.go:89] found id: ""
	I1206 10:52:32.221313  405191 logs.go:282] 0 containers: []
	W1206 10:52:32.221321  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:52:32.221326  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:52:32.221382  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:52:32.247157  405191 cri.go:89] found id: ""
	I1206 10:52:32.247171  405191 logs.go:282] 0 containers: []
	W1206 10:52:32.247178  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:52:32.247201  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:52:32.247261  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:52:32.272996  405191 cri.go:89] found id: ""
	I1206 10:52:32.273011  405191 logs.go:282] 0 containers: []
	W1206 10:52:32.273018  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:52:32.273023  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:52:32.273087  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:52:32.298872  405191 cri.go:89] found id: ""
	I1206 10:52:32.298885  405191 logs.go:282] 0 containers: []
	W1206 10:52:32.298892  405191 logs.go:284] No container was found matching "kindnet"
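
With the wait exhausted, the diagnostics sweep issues one crictl query per control-plane component; every query coming back empty confirms that nothing was ever started. A compact sketch of the sweep:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // findComponentContainers runs `crictl ps -a --quiet --name=X` once per
    // component; components with no output get no map entry, matching the
    // `found id: ""` lines above.
    func findComponentContainers() map[string][]string {
        components := []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}
        found := make(map[string][]string)
        for _, name := range components {
            out, err := exec.Command("sudo", "crictl", "ps", "-a",
                "--quiet", "--name="+name).Output()
            if err != nil {
                continue
            }
            if ids := strings.Fields(string(out)); len(ids) > 0 {
                found[name] = ids
            }
        }
        return found
    }

    func main() {
        fmt.Println(findComponentContainers())
    }
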
	I1206 10:52:32.298899  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:52:32.298909  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:52:32.365036  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:52:32.365056  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:52:32.380152  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:52:32.380168  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:52:32.448480  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:52:32.439513   11005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:32.440191   11005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:32.441917   11005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:32.442441   11005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:32.444184   11005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:52:32.448508  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:52:32.448519  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:52:32.521363  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:52:32.521385  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
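
The container-status gather uses a shell fallback chain: resolve crictl via which (falling back to the bare name), and if that whole command fails, try docker instead. The backtick substitution in the log is equivalent to $(...). A one-function sketch:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus mirrors the fallback above: prefer crictl, then docker.
    func containerStatus() (string, error) {
        script := `sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a`
        out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := containerStatus()
        fmt.Println(err)
        fmt.Print(out)
    }
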
	I1206 10:52:35.051557  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:52:35.061829  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:52:35.061887  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:52:35.090093  405191 cri.go:89] found id: ""
	I1206 10:52:35.090109  405191 logs.go:282] 0 containers: []
	W1206 10:52:35.090116  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:52:35.090123  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:52:35.090185  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:52:35.120692  405191 cri.go:89] found id: ""
	I1206 10:52:35.120706  405191 logs.go:282] 0 containers: []
	W1206 10:52:35.120713  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:52:35.120718  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:52:35.120781  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:52:35.150871  405191 cri.go:89] found id: ""
	I1206 10:52:35.150885  405191 logs.go:282] 0 containers: []
	W1206 10:52:35.150895  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:52:35.150901  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:52:35.150966  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:52:35.178176  405191 cri.go:89] found id: ""
	I1206 10:52:35.178189  405191 logs.go:282] 0 containers: []
	W1206 10:52:35.178196  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:52:35.178201  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:52:35.178259  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:52:35.203836  405191 cri.go:89] found id: ""
	I1206 10:52:35.203851  405191 logs.go:282] 0 containers: []
	W1206 10:52:35.203858  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:52:35.203864  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:52:35.203922  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:52:35.229838  405191 cri.go:89] found id: ""
	I1206 10:52:35.229852  405191 logs.go:282] 0 containers: []
	W1206 10:52:35.229860  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:52:35.229865  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:52:35.229923  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:52:35.255728  405191 cri.go:89] found id: ""
	I1206 10:52:35.255742  405191 logs.go:282] 0 containers: []
	W1206 10:52:35.255749  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:52:35.255763  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:52:35.255774  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:52:35.326293  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:52:35.326313  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:52:35.341587  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:52:35.341603  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:52:35.406128  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:52:35.396962   11109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:35.397407   11109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:35.399334   11109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:35.399842   11109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:35.401729   11109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:52:35.406138  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:52:35.406148  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:52:35.477539  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:52:35.477561  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:52:38.012461  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:52:38.026662  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:52:38.026746  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:52:38.057501  405191 cri.go:89] found id: ""
	I1206 10:52:38.057514  405191 logs.go:282] 0 containers: []
	W1206 10:52:38.057522  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:52:38.057527  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:52:38.057597  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:52:38.087721  405191 cri.go:89] found id: ""
	I1206 10:52:38.087736  405191 logs.go:282] 0 containers: []
	W1206 10:52:38.087744  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:52:38.087750  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:52:38.087812  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:52:38.115539  405191 cri.go:89] found id: ""
	I1206 10:52:38.115553  405191 logs.go:282] 0 containers: []
	W1206 10:52:38.115560  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:52:38.115566  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:52:38.115624  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:52:38.140812  405191 cri.go:89] found id: ""
	I1206 10:52:38.140826  405191 logs.go:282] 0 containers: []
	W1206 10:52:38.140833  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:52:38.140838  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:52:38.140896  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:52:38.166576  405191 cri.go:89] found id: ""
	I1206 10:52:38.166590  405191 logs.go:282] 0 containers: []
	W1206 10:52:38.166597  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:52:38.166602  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:52:38.166662  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:52:38.191851  405191 cri.go:89] found id: ""
	I1206 10:52:38.191864  405191 logs.go:282] 0 containers: []
	W1206 10:52:38.191871  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:52:38.191876  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:52:38.191933  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:52:38.217461  405191 cri.go:89] found id: ""
	I1206 10:52:38.217475  405191 logs.go:282] 0 containers: []
	W1206 10:52:38.217482  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:52:38.217490  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:52:38.217502  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:52:38.232449  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:52:38.232465  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:52:38.295220  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:52:38.286931   11213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:38.287615   11213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:38.289268   11213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:38.289707   11213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:38.291283   11213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:52:38.295242  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:52:38.295255  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:52:38.363789  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:52:38.363809  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:52:38.393298  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:52:38.393313  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:52:40.963508  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:52:40.975400  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:52:40.975471  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:52:41.012381  405191 cri.go:89] found id: ""
	I1206 10:52:41.012396  405191 logs.go:282] 0 containers: []
	W1206 10:52:41.012403  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:52:41.012409  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:52:41.012481  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:52:41.045820  405191 cri.go:89] found id: ""
	I1206 10:52:41.045833  405191 logs.go:282] 0 containers: []
	W1206 10:52:41.045840  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:52:41.045845  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:52:41.045905  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:52:41.072220  405191 cri.go:89] found id: ""
	I1206 10:52:41.072234  405191 logs.go:282] 0 containers: []
	W1206 10:52:41.072241  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:52:41.072246  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:52:41.072315  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:52:41.099263  405191 cri.go:89] found id: ""
	I1206 10:52:41.099289  405191 logs.go:282] 0 containers: []
	W1206 10:52:41.099297  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:52:41.099302  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:52:41.099400  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:52:41.125321  405191 cri.go:89] found id: ""
	I1206 10:52:41.125335  405191 logs.go:282] 0 containers: []
	W1206 10:52:41.125342  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:52:41.125347  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:52:41.125407  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:52:41.151976  405191 cri.go:89] found id: ""
	I1206 10:52:41.151991  405191 logs.go:282] 0 containers: []
	W1206 10:52:41.151998  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:52:41.152004  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:52:41.152071  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:52:41.182220  405191 cri.go:89] found id: ""
	I1206 10:52:41.182246  405191 logs.go:282] 0 containers: []
	W1206 10:52:41.182254  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:52:41.182262  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:52:41.182276  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:52:41.248526  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:52:41.239066   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:41.239904   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:41.241505   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:41.242002   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:41.243768   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:52:41.248580  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:52:41.248592  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:52:41.318224  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:52:41.318245  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:52:41.351350  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:52:41.351366  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:52:41.419147  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:52:41.419175  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:52:43.934479  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:52:43.945219  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:52:43.945319  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:52:43.977434  405191 cri.go:89] found id: ""
	I1206 10:52:43.977447  405191 logs.go:282] 0 containers: []
	W1206 10:52:43.977455  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:52:43.977460  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:52:43.977521  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:52:44.023455  405191 cri.go:89] found id: ""
	I1206 10:52:44.023469  405191 logs.go:282] 0 containers: []
	W1206 10:52:44.023476  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:52:44.023481  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:52:44.023547  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:52:44.054515  405191 cri.go:89] found id: ""
	I1206 10:52:44.054528  405191 logs.go:282] 0 containers: []
	W1206 10:52:44.054535  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:52:44.054542  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:52:44.054606  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:52:44.081078  405191 cri.go:89] found id: ""
	I1206 10:52:44.081092  405191 logs.go:282] 0 containers: []
	W1206 10:52:44.081100  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:52:44.081105  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:52:44.081169  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:52:44.107423  405191 cri.go:89] found id: ""
	I1206 10:52:44.107437  405191 logs.go:282] 0 containers: []
	W1206 10:52:44.107451  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:52:44.107456  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:52:44.107514  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:52:44.134813  405191 cri.go:89] found id: ""
	I1206 10:52:44.134827  405191 logs.go:282] 0 containers: []
	W1206 10:52:44.134834  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:52:44.134839  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:52:44.134901  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:52:44.160796  405191 cri.go:89] found id: ""
	I1206 10:52:44.160816  405191 logs.go:282] 0 containers: []
	W1206 10:52:44.160824  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:52:44.160831  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:52:44.160842  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:52:44.190778  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:52:44.190796  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:52:44.257562  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:52:44.257581  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:52:44.272647  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:52:44.272663  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:52:44.338023  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:52:44.329392   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:44.330156   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:44.331823   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:44.332332   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:44.333956   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:52:44.338033  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:52:44.338043  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:52:46.906964  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:52:46.917503  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:52:46.917559  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:52:46.949168  405191 cri.go:89] found id: ""
	I1206 10:52:46.949182  405191 logs.go:282] 0 containers: []
	W1206 10:52:46.949189  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:52:46.949194  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:52:46.949253  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:52:46.981111  405191 cri.go:89] found id: ""
	I1206 10:52:46.981124  405191 logs.go:282] 0 containers: []
	W1206 10:52:46.981131  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:52:46.981136  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:52:46.981196  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:52:47.022951  405191 cri.go:89] found id: ""
	I1206 10:52:47.022965  405191 logs.go:282] 0 containers: []
	W1206 10:52:47.022972  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:52:47.022977  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:52:47.023037  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:52:47.052856  405191 cri.go:89] found id: ""
	I1206 10:52:47.052870  405191 logs.go:282] 0 containers: []
	W1206 10:52:47.052886  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:52:47.052891  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:52:47.052967  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:52:47.083787  405191 cri.go:89] found id: ""
	I1206 10:52:47.083800  405191 logs.go:282] 0 containers: []
	W1206 10:52:47.083807  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:52:47.083813  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:52:47.083870  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:52:47.109033  405191 cri.go:89] found id: ""
	I1206 10:52:47.109046  405191 logs.go:282] 0 containers: []
	W1206 10:52:47.109054  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:52:47.109059  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:52:47.109115  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:52:47.139758  405191 cri.go:89] found id: ""
	I1206 10:52:47.139772  405191 logs.go:282] 0 containers: []
	W1206 10:52:47.139779  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:52:47.139788  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:52:47.139798  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:52:47.154866  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:52:47.154884  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:52:47.221813  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:52:47.213688   11530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:47.214230   11530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:47.215830   11530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:47.216327   11530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:47.217906   11530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:52:47.221824  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:52:47.221835  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:52:47.290233  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:52:47.290253  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:52:47.321014  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:52:47.321036  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
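The cycle above is minikube's control-plane sweep: one crictl query per expected component, each returning an empty ID list. For reproducing it by hand, a minimal bash sketch built from the same commands the Run: lines show (run inside the node, e.g. via minikube ssh; assumes crictl is on PATH there, as it is on the minikube node image):

    #!/usr/bin/env bash
    # Reproduce the container sweep from the log above by hand.
    # Same component names and the same `crictl ps -a --quiet --name=...`
    # invocation; empty output is what each `found id: ""` line records.
    for name in kube-apiserver etcd coredns kube-scheduler \
                kube-proxy kube-controller-manager kindnet; do
      ids="$(sudo crictl ps -a --quiet --name="${name}")"
      if [ -z "${ids}" ]; then
        echo "no container matching \"${name}\""
      else
        echo "${name}: ${ids}"
      fi
    done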
	I1206 10:52:49.890726  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:52:49.902627  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:52:49.902688  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:52:49.929201  405191 cri.go:89] found id: ""
	I1206 10:52:49.929215  405191 logs.go:282] 0 containers: []
	W1206 10:52:49.929224  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:52:49.929230  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:52:49.929290  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:52:49.956185  405191 cri.go:89] found id: ""
	I1206 10:52:49.956198  405191 logs.go:282] 0 containers: []
	W1206 10:52:49.956205  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:52:49.956210  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:52:49.956269  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:52:49.993314  405191 cri.go:89] found id: ""
	I1206 10:52:49.993329  405191 logs.go:282] 0 containers: []
	W1206 10:52:49.993336  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:52:49.993343  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:52:49.993403  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:52:50.037379  405191 cri.go:89] found id: ""
	I1206 10:52:50.037395  405191 logs.go:282] 0 containers: []
	W1206 10:52:50.037403  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:52:50.037409  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:52:50.037472  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:52:50.067336  405191 cri.go:89] found id: ""
	I1206 10:52:50.067351  405191 logs.go:282] 0 containers: []
	W1206 10:52:50.067358  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:52:50.067363  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:52:50.067469  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:52:50.094997  405191 cri.go:89] found id: ""
	I1206 10:52:50.095010  405191 logs.go:282] 0 containers: []
	W1206 10:52:50.095018  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:52:50.095023  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:52:50.095087  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:52:50.122233  405191 cri.go:89] found id: ""
	I1206 10:52:50.122247  405191 logs.go:282] 0 containers: []
	W1206 10:52:50.122254  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:52:50.122262  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:52:50.122274  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:52:50.137790  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:52:50.137811  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:52:50.201020  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:52:50.192768   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:50.193599   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:50.195170   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:50.195719   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:50.197320   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:52:50.201031  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:52:50.201041  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:52:50.275122  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:52:50.275142  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:52:50.303756  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:52:50.303777  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:52:52.872285  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:52:52.882349  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:52:52.882406  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:52:52.911618  405191 cri.go:89] found id: ""
	I1206 10:52:52.911631  405191 logs.go:282] 0 containers: []
	W1206 10:52:52.911638  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:52:52.911644  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:52:52.911705  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:52:52.937062  405191 cri.go:89] found id: ""
	I1206 10:52:52.937077  405191 logs.go:282] 0 containers: []
	W1206 10:52:52.937084  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:52:52.937089  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:52:52.937149  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:52:52.963326  405191 cri.go:89] found id: ""
	I1206 10:52:52.963340  405191 logs.go:282] 0 containers: []
	W1206 10:52:52.963347  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:52:52.963352  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:52:52.963437  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:52:52.997061  405191 cri.go:89] found id: ""
	I1206 10:52:52.997074  405191 logs.go:282] 0 containers: []
	W1206 10:52:52.997081  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:52:52.997086  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:52:52.997149  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:52:53.035456  405191 cri.go:89] found id: ""
	I1206 10:52:53.035469  405191 logs.go:282] 0 containers: []
	W1206 10:52:53.035477  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:52:53.035483  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:52:53.035543  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:52:53.063687  405191 cri.go:89] found id: ""
	I1206 10:52:53.063700  405191 logs.go:282] 0 containers: []
	W1206 10:52:53.063707  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:52:53.063712  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:52:53.063770  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:52:53.089131  405191 cri.go:89] found id: ""
	I1206 10:52:53.089145  405191 logs.go:282] 0 containers: []
	W1206 10:52:53.089152  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:52:53.089161  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:52:53.089180  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:52:53.154130  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:52:53.145768   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:53.146202   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:53.147939   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:53.148440   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:53.150128   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:52:53.154142  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:52:53.154153  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:52:53.226211  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:52:53.226231  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:52:53.255876  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:52:53.255893  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:52:53.328864  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:52:53.328884  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
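Every retry begins with the same liveness probe, sudo pgrep -xnf kube-apiserver.*minikube.*, and every kubectl call then fails against localhost:8441 with connection refused. A hedged way to check both conditions by hand; port 8441 is taken from the errors above, and curl is an illustrative stand-in for kubectl's HTTP client, not something minikube itself runs:

    #!/usr/bin/env bash
    # Process check, verbatim from the log's Run: line (pattern quoted here).
    if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
      echo "kube-apiserver process found"
    else
      echo "no kube-apiserver process"
    fi
    # Port check: -k skips TLS verification. "connection refused" here is
    # the same failure the memcache.go lines above report on :8441.
    curl -ksS --max-time 5 https://localhost:8441/healthz \
      || echo "apiserver not reachable on localhost:8441"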
	I1206 10:52:55.844855  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:52:55.855173  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:52:55.855232  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:52:55.882003  405191 cri.go:89] found id: ""
	I1206 10:52:55.882016  405191 logs.go:282] 0 containers: []
	W1206 10:52:55.882037  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:52:55.882043  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:52:55.882102  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:52:55.906679  405191 cri.go:89] found id: ""
	I1206 10:52:55.906693  405191 logs.go:282] 0 containers: []
	W1206 10:52:55.906700  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:52:55.906705  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:52:55.906763  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:52:55.932742  405191 cri.go:89] found id: ""
	I1206 10:52:55.932756  405191 logs.go:282] 0 containers: []
	W1206 10:52:55.932763  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:52:55.932769  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:52:55.932830  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:52:55.959084  405191 cri.go:89] found id: ""
	I1206 10:52:55.959097  405191 logs.go:282] 0 containers: []
	W1206 10:52:55.959104  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:52:55.959109  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:52:55.959167  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:52:56.001438  405191 cri.go:89] found id: ""
	I1206 10:52:56.001453  405191 logs.go:282] 0 containers: []
	W1206 10:52:56.001461  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:52:56.001467  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:52:56.001540  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:52:56.039276  405191 cri.go:89] found id: ""
	I1206 10:52:56.039291  405191 logs.go:282] 0 containers: []
	W1206 10:52:56.039298  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:52:56.039304  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:52:56.039368  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:52:56.074083  405191 cri.go:89] found id: ""
	I1206 10:52:56.074097  405191 logs.go:282] 0 containers: []
	W1206 10:52:56.074104  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:52:56.074112  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:52:56.074124  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:52:56.148294  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:52:56.148320  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:52:56.163720  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:52:56.163740  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:52:56.231608  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:52:56.222271   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:56.222910   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:56.224621   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:56.225337   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:56.227055   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:52:56.231633  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:52:56.231644  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:52:56.301348  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:52:56.301373  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:52:58.834132  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:52:58.844214  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:52:58.844271  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:52:58.871604  405191 cri.go:89] found id: ""
	I1206 10:52:58.871618  405191 logs.go:282] 0 containers: []
	W1206 10:52:58.871625  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:52:58.871630  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:52:58.871689  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:52:58.898243  405191 cri.go:89] found id: ""
	I1206 10:52:58.898257  405191 logs.go:282] 0 containers: []
	W1206 10:52:58.898264  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:52:58.898269  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:52:58.898325  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:52:58.921887  405191 cri.go:89] found id: ""
	I1206 10:52:58.921901  405191 logs.go:282] 0 containers: []
	W1206 10:52:58.921907  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:52:58.921913  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:52:58.921970  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:52:58.947546  405191 cri.go:89] found id: ""
	I1206 10:52:58.947563  405191 logs.go:282] 0 containers: []
	W1206 10:52:58.947570  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:52:58.947575  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:52:58.947645  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:52:58.976915  405191 cri.go:89] found id: ""
	I1206 10:52:58.976930  405191 logs.go:282] 0 containers: []
	W1206 10:52:58.976937  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:52:58.976942  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:52:58.977005  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:52:59.013936  405191 cri.go:89] found id: ""
	I1206 10:52:59.013949  405191 logs.go:282] 0 containers: []
	W1206 10:52:59.013956  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:52:59.013962  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:52:59.014020  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:52:59.044670  405191 cri.go:89] found id: ""
	I1206 10:52:59.044683  405191 logs.go:282] 0 containers: []
	W1206 10:52:59.044690  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:52:59.044698  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:52:59.044708  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:52:59.111552  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:52:59.111571  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:52:59.125917  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:52:59.125933  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:52:59.190341  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:52:59.182165   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:59.182776   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:59.184355   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:59.184805   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:59.186371   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:52:59.190351  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:52:59.190362  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:52:59.258936  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:52:59.258957  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
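With no containers to inspect, each cycle falls back to the four host-level sources seen above: kubelet and CRI-O from journald, a filtered dmesg, and a container-status listing. Collected into one script using the same commands verbatim (assumes a systemd host, which the minikube node image is):

    #!/usr/bin/env bash
    # The four fallback log sources the gatherer cycles through above.
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u crio -n 400
    # Container status: prefer crictl, fall back to docker, as logged.
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a

Only the gathering order varies between cycles; the command set itself is fixed.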
	I1206 10:53:01.790777  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:01.802470  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:01.802534  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:01.828331  405191 cri.go:89] found id: ""
	I1206 10:53:01.828345  405191 logs.go:282] 0 containers: []
	W1206 10:53:01.828352  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:01.828357  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:01.828415  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:01.853132  405191 cri.go:89] found id: ""
	I1206 10:53:01.853145  405191 logs.go:282] 0 containers: []
	W1206 10:53:01.853153  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:01.853158  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:01.853218  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:01.879034  405191 cri.go:89] found id: ""
	I1206 10:53:01.879048  405191 logs.go:282] 0 containers: []
	W1206 10:53:01.879055  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:01.879060  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:01.879119  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:01.905079  405191 cri.go:89] found id: ""
	I1206 10:53:01.905094  405191 logs.go:282] 0 containers: []
	W1206 10:53:01.905101  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:01.905106  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:01.905168  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:01.931029  405191 cri.go:89] found id: ""
	I1206 10:53:01.931043  405191 logs.go:282] 0 containers: []
	W1206 10:53:01.931050  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:01.931055  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:01.931115  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:01.958324  405191 cri.go:89] found id: ""
	I1206 10:53:01.958338  405191 logs.go:282] 0 containers: []
	W1206 10:53:01.958345  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:01.958351  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:01.958406  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:01.999570  405191 cri.go:89] found id: ""
	I1206 10:53:01.999583  405191 logs.go:282] 0 containers: []
	W1206 10:53:01.999590  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:01.999598  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:01.999613  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:02.075754  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:02.075775  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:02.091145  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:02.091168  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:02.166018  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:02.151211   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:02.151882   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:02.153563   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:02.154149   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:02.161209   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:53:02.166029  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:02.166041  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:02.236832  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:02.236853  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:04.769770  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:04.780155  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:04.780230  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:04.805785  405191 cri.go:89] found id: ""
	I1206 10:53:04.805799  405191 logs.go:282] 0 containers: []
	W1206 10:53:04.805806  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:04.805811  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:04.805871  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:04.833423  405191 cri.go:89] found id: ""
	I1206 10:53:04.833445  405191 logs.go:282] 0 containers: []
	W1206 10:53:04.833452  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:04.833458  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:04.833523  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:04.859864  405191 cri.go:89] found id: ""
	I1206 10:53:04.859879  405191 logs.go:282] 0 containers: []
	W1206 10:53:04.859888  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:04.859895  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:04.859964  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:04.886417  405191 cri.go:89] found id: ""
	I1206 10:53:04.886431  405191 logs.go:282] 0 containers: []
	W1206 10:53:04.886437  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:04.886443  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:04.886503  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:04.912019  405191 cri.go:89] found id: ""
	I1206 10:53:04.912033  405191 logs.go:282] 0 containers: []
	W1206 10:53:04.912040  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:04.912044  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:04.912104  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:04.941901  405191 cri.go:89] found id: ""
	I1206 10:53:04.941915  405191 logs.go:282] 0 containers: []
	W1206 10:53:04.941922  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:04.941928  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:04.941990  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:04.967316  405191 cri.go:89] found id: ""
	I1206 10:53:04.967330  405191 logs.go:282] 0 containers: []
	W1206 10:53:04.967337  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:04.967344  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:04.967356  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:05.048268  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:05.048290  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:05.064282  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:05.064299  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:05.132111  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:05.123756   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:05.124563   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:05.126211   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:05.126545   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:05.128101   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:53:05.132131  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:05.132142  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:05.202438  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:05.202460  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:07.731737  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:07.742255  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:07.742344  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:07.767645  405191 cri.go:89] found id: ""
	I1206 10:53:07.767659  405191 logs.go:282] 0 containers: []
	W1206 10:53:07.767666  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:07.767671  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:07.767730  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:07.793951  405191 cri.go:89] found id: ""
	I1206 10:53:07.793975  405191 logs.go:282] 0 containers: []
	W1206 10:53:07.793983  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:07.793989  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:07.794055  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:07.819683  405191 cri.go:89] found id: ""
	I1206 10:53:07.819699  405191 logs.go:282] 0 containers: []
	W1206 10:53:07.819705  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:07.819711  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:07.819784  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:07.851523  405191 cri.go:89] found id: ""
	I1206 10:53:07.851537  405191 logs.go:282] 0 containers: []
	W1206 10:53:07.851543  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:07.851549  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:07.851627  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:07.878807  405191 cri.go:89] found id: ""
	I1206 10:53:07.878831  405191 logs.go:282] 0 containers: []
	W1206 10:53:07.878838  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:07.878844  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:07.878915  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:07.911047  405191 cri.go:89] found id: ""
	I1206 10:53:07.911060  405191 logs.go:282] 0 containers: []
	W1206 10:53:07.911078  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:07.911084  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:07.911155  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:07.937042  405191 cri.go:89] found id: ""
	I1206 10:53:07.937064  405191 logs.go:282] 0 containers: []
	W1206 10:53:07.937072  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:07.937080  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:07.937091  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:08.004528  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:08.004551  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:08.026930  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:08.026947  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:08.109064  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:08.100555   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:08.101020   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:08.102569   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:08.102918   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:08.104386   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:53:08.109086  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:08.109096  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:08.177486  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:08.177508  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
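The describe-nodes gather always runs the kubectl binary staged on the node against the node's own kubeconfig, so while nothing listens on :8441 it exits 1 and the gatherer logs the failure and moves on. The same invocation by hand, paths verbatim from the log:

    #!/usr/bin/env bash
    # Same command as the failing "describe nodes" Run: lines above; while
    # the apiserver is down this prints the memcache.go "connection refused"
    # errors to stderr and exits with status 1.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
    echo "kubectl exit status: $?"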
	I1206 10:53:10.706543  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:10.717198  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:10.717262  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:10.743532  405191 cri.go:89] found id: ""
	I1206 10:53:10.743545  405191 logs.go:282] 0 containers: []
	W1206 10:53:10.743552  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:10.743557  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:10.743617  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:10.768882  405191 cri.go:89] found id: ""
	I1206 10:53:10.768897  405191 logs.go:282] 0 containers: []
	W1206 10:53:10.768903  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:10.768908  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:10.768966  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:10.798729  405191 cri.go:89] found id: ""
	I1206 10:53:10.798742  405191 logs.go:282] 0 containers: []
	W1206 10:53:10.798751  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:10.798756  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:10.798814  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:10.823956  405191 cri.go:89] found id: ""
	I1206 10:53:10.823971  405191 logs.go:282] 0 containers: []
	W1206 10:53:10.823978  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:10.823984  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:10.824054  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:10.849242  405191 cri.go:89] found id: ""
	I1206 10:53:10.849271  405191 logs.go:282] 0 containers: []
	W1206 10:53:10.849278  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:10.849283  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:10.849351  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:10.876058  405191 cri.go:89] found id: ""
	I1206 10:53:10.876071  405191 logs.go:282] 0 containers: []
	W1206 10:53:10.876078  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:10.876086  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:10.876145  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:10.901170  405191 cri.go:89] found id: ""
	I1206 10:53:10.901184  405191 logs.go:282] 0 containers: []
	W1206 10:53:10.901192  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:10.901199  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:10.901210  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:10.971362  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:10.971388  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:11.005981  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:11.006000  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:11.089894  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:11.089916  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:11.106328  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:11.106365  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:11.174633  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:11.166045   12392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:11.167001   12392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:11.168645   12392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:11.169014   12392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:11.170539   12392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:53:13.674898  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:13.689619  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:13.689793  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:13.724853  405191 cri.go:89] found id: ""
	I1206 10:53:13.724867  405191 logs.go:282] 0 containers: []
	W1206 10:53:13.724874  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:13.724880  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:13.724939  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:13.751349  405191 cri.go:89] found id: ""
	I1206 10:53:13.751363  405191 logs.go:282] 0 containers: []
	W1206 10:53:13.751369  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:13.751402  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:13.751488  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:13.778380  405191 cri.go:89] found id: ""
	I1206 10:53:13.778395  405191 logs.go:282] 0 containers: []
	W1206 10:53:13.778402  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:13.778408  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:13.778474  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:13.806068  405191 cri.go:89] found id: ""
	I1206 10:53:13.806081  405191 logs.go:282] 0 containers: []
	W1206 10:53:13.806088  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:13.806093  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:13.806150  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:13.831347  405191 cri.go:89] found id: ""
	I1206 10:53:13.831360  405191 logs.go:282] 0 containers: []
	W1206 10:53:13.831367  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:13.831410  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:13.831494  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:13.856962  405191 cri.go:89] found id: ""
	I1206 10:53:13.856976  405191 logs.go:282] 0 containers: []
	W1206 10:53:13.856983  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:13.856994  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:13.857057  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:13.883227  405191 cri.go:89] found id: ""
	I1206 10:53:13.883241  405191 logs.go:282] 0 containers: []
	W1206 10:53:13.883248  405191 logs.go:284] No container was found matching "kindnet"
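	(The block above is minikube scanning the CRI for each expected control-plane container and finding none. The same scan can be reproduced by hand inside the node; this is a minimal sketch built only from the crictl invocation shown in the Run: lines:

	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	      ids=$(sudo crictl ps -a --quiet --name="$c")
	      echo "$c: ${ids:-<no containers>}"
	    done

	An empty result for every name, as here, means the kubelet has not created, or has torn down, all of the control-plane containers, which is consistent with the connection-refused errors on the apiserver port.)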
	I1206 10:53:13.883256  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:13.883268  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:13.912731  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:13.912749  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:13.981562  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:13.981581  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:13.997805  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:13.997822  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:14.076333  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:14.066553   12495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:14.067750   12495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:14.068525   12495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:14.070431   12495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:14.071166   12495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:14.066553   12495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:14.067750   12495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:14.068525   12495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:14.070431   12495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:14.071166   12495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:53:14.076343  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:14.076355  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
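	(Each retry ends with the same evidence-gathering pass. To collect the identical logs manually, the commands below are taken verbatim from the Run: lines above:

	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo journalctl -u crio -n 400
	    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a

	Of these, only "describe nodes" depends on the apiserver, which is why it is the only step that keeps failing while the journal and CRI queries succeed.)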
	I1206 10:53:16.646007  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:16.656726  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:16.656822  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:16.682515  405191 cri.go:89] found id: ""
	I1206 10:53:16.682529  405191 logs.go:282] 0 containers: []
	W1206 10:53:16.682535  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:16.682541  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:16.682609  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:16.708327  405191 cri.go:89] found id: ""
	I1206 10:53:16.708341  405191 logs.go:282] 0 containers: []
	W1206 10:53:16.708359  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:16.708365  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:16.708433  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:16.744002  405191 cri.go:89] found id: ""
	I1206 10:53:16.744023  405191 logs.go:282] 0 containers: []
	W1206 10:53:16.744032  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:16.744037  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:16.744099  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:16.771487  405191 cri.go:89] found id: ""
	I1206 10:53:16.771501  405191 logs.go:282] 0 containers: []
	W1206 10:53:16.771509  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:16.771514  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:16.771594  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:16.799494  405191 cri.go:89] found id: ""
	I1206 10:53:16.799507  405191 logs.go:282] 0 containers: []
	W1206 10:53:16.799514  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:16.799520  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:16.799595  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:16.825114  405191 cri.go:89] found id: ""
	I1206 10:53:16.825128  405191 logs.go:282] 0 containers: []
	W1206 10:53:16.825135  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:16.825141  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:16.825204  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:16.851277  405191 cri.go:89] found id: ""
	I1206 10:53:16.851304  405191 logs.go:282] 0 containers: []
	W1206 10:53:16.851312  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:16.851319  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:16.851329  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:16.880918  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:16.880935  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:16.946617  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:16.946636  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:16.961739  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:16.961756  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:17.047880  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:17.038809   12598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:17.039588   12598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:17.041249   12598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:17.041748   12598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:17.043299   12598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:17.038809   12598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:17.039588   12598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:17.041249   12598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:17.041748   12598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:17.043299   12598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:53:17.047890  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:17.047901  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:19.616855  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:19.627228  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:19.627288  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:19.654067  405191 cri.go:89] found id: ""
	I1206 10:53:19.654081  405191 logs.go:282] 0 containers: []
	W1206 10:53:19.654088  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:19.654093  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:19.654166  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:19.679488  405191 cri.go:89] found id: ""
	I1206 10:53:19.679502  405191 logs.go:282] 0 containers: []
	W1206 10:53:19.679509  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:19.679515  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:19.679573  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:19.706620  405191 cri.go:89] found id: ""
	I1206 10:53:19.706635  405191 logs.go:282] 0 containers: []
	W1206 10:53:19.706642  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:19.706647  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:19.706706  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:19.734381  405191 cri.go:89] found id: ""
	I1206 10:53:19.734395  405191 logs.go:282] 0 containers: []
	W1206 10:53:19.734406  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:19.734412  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:19.734476  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:19.761415  405191 cri.go:89] found id: ""
	I1206 10:53:19.761429  405191 logs.go:282] 0 containers: []
	W1206 10:53:19.761436  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:19.761441  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:19.761502  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:19.787176  405191 cri.go:89] found id: ""
	I1206 10:53:19.787190  405191 logs.go:282] 0 containers: []
	W1206 10:53:19.787203  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:19.787209  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:19.787270  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:19.813067  405191 cri.go:89] found id: ""
	I1206 10:53:19.813081  405191 logs.go:282] 0 containers: []
	W1206 10:53:19.813088  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:19.813096  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:19.813105  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:19.878821  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:19.878840  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:19.894664  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:19.894680  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:19.965061  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:19.956101   12690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:19.957218   12690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:19.958973   12690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:19.959413   12690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:19.960938   12690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:19.956101   12690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:19.957218   12690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:19.958973   12690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:19.959413   12690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:19.960938   12690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:53:19.965101  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:19.965111  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:20.038434  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:20.038456  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:22.572942  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:22.583202  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:22.583273  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:22.608534  405191 cri.go:89] found id: ""
	I1206 10:53:22.608548  405191 logs.go:282] 0 containers: []
	W1206 10:53:22.608556  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:22.608561  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:22.608623  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:22.637655  405191 cri.go:89] found id: ""
	I1206 10:53:22.637673  405191 logs.go:282] 0 containers: []
	W1206 10:53:22.637680  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:22.637685  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:22.637748  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:22.666908  405191 cri.go:89] found id: ""
	I1206 10:53:22.666922  405191 logs.go:282] 0 containers: []
	W1206 10:53:22.666929  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:22.666935  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:22.666995  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:22.694611  405191 cri.go:89] found id: ""
	I1206 10:53:22.694625  405191 logs.go:282] 0 containers: []
	W1206 10:53:22.694633  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:22.694638  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:22.694705  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:22.720468  405191 cri.go:89] found id: ""
	I1206 10:53:22.720482  405191 logs.go:282] 0 containers: []
	W1206 10:53:22.720489  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:22.720494  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:22.720551  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:22.750061  405191 cri.go:89] found id: ""
	I1206 10:53:22.750075  405191 logs.go:282] 0 containers: []
	W1206 10:53:22.750082  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:22.750087  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:22.750148  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:22.778201  405191 cri.go:89] found id: ""
	I1206 10:53:22.778216  405191 logs.go:282] 0 containers: []
	W1206 10:53:22.778223  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:22.778230  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:22.778241  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:22.848689  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:22.848710  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:22.878893  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:22.878908  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:22.945043  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:22.945065  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:22.960966  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:22.960982  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:23.041735  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:23.033031   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:23.033838   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:23.035561   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:23.036147   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:23.037681   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:23.033031   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:23.033838   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:23.035561   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:23.036147   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:23.037681   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
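	(The timestamps show the whole cycle, one pgrep, seven crictl scans, then log gathering, repeating roughly every three seconds: the harness is polling for the apiserver process until some deadline. A minimal sketch of that kind of wait loop follows; the 300-second deadline is a hypothetical stand-in, since minikube's real timeout is not shown in this log:

	    deadline=$((SECONDS + 300))   # hypothetical timeout, not minikube's actual value
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      [ "$SECONDS" -ge "$deadline" ] && { echo 'kube-apiserver never appeared'; exit 1; }
	      sleep 3
	    done
	)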
	I1206 10:53:25.543429  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:25.553845  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:25.553906  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:25.580411  405191 cri.go:89] found id: ""
	I1206 10:53:25.580427  405191 logs.go:282] 0 containers: []
	W1206 10:53:25.580434  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:25.580439  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:25.580498  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:25.610347  405191 cri.go:89] found id: ""
	I1206 10:53:25.610361  405191 logs.go:282] 0 containers: []
	W1206 10:53:25.610368  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:25.610373  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:25.610430  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:25.637376  405191 cri.go:89] found id: ""
	I1206 10:53:25.637390  405191 logs.go:282] 0 containers: []
	W1206 10:53:25.637398  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:25.637403  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:25.637463  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:25.666544  405191 cri.go:89] found id: ""
	I1206 10:53:25.666558  405191 logs.go:282] 0 containers: []
	W1206 10:53:25.666572  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:25.666577  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:25.666636  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:25.692777  405191 cri.go:89] found id: ""
	I1206 10:53:25.692791  405191 logs.go:282] 0 containers: []
	W1206 10:53:25.692798  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:25.692803  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:25.692865  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:25.721819  405191 cri.go:89] found id: ""
	I1206 10:53:25.721833  405191 logs.go:282] 0 containers: []
	W1206 10:53:25.721841  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:25.721845  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:25.721901  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:25.749420  405191 cri.go:89] found id: ""
	I1206 10:53:25.749435  405191 logs.go:282] 0 containers: []
	W1206 10:53:25.749442  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:25.749450  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:25.749461  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:25.817956  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:25.817979  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:25.847454  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:25.847480  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:25.913445  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:25.913464  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:25.928310  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:25.928326  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:26.010257  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:25.998143   12912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:25.999802   12912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:26.001851   12912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:26.002260   12912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:26.005429   12912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:25.998143   12912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:25.999802   12912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:26.001851   12912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:26.002260   12912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:26.005429   12912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:53:28.510540  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:28.521536  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:28.521597  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:28.549848  405191 cri.go:89] found id: ""
	I1206 10:53:28.549862  405191 logs.go:282] 0 containers: []
	W1206 10:53:28.549869  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:28.549880  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:28.549941  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:28.574916  405191 cri.go:89] found id: ""
	I1206 10:53:28.574929  405191 logs.go:282] 0 containers: []
	W1206 10:53:28.574937  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:28.574941  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:28.575001  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:28.603948  405191 cri.go:89] found id: ""
	I1206 10:53:28.603963  405191 logs.go:282] 0 containers: []
	W1206 10:53:28.603971  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:28.603976  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:28.604038  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:28.633100  405191 cri.go:89] found id: ""
	I1206 10:53:28.633114  405191 logs.go:282] 0 containers: []
	W1206 10:53:28.633121  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:28.633127  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:28.633186  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:28.658360  405191 cri.go:89] found id: ""
	I1206 10:53:28.658374  405191 logs.go:282] 0 containers: []
	W1206 10:53:28.658381  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:28.658386  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:28.658450  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:28.686919  405191 cri.go:89] found id: ""
	I1206 10:53:28.686933  405191 logs.go:282] 0 containers: []
	W1206 10:53:28.686949  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:28.686955  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:28.687012  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:28.713970  405191 cri.go:89] found id: ""
	I1206 10:53:28.713984  405191 logs.go:282] 0 containers: []
	W1206 10:53:28.713991  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:28.714001  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:28.714011  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:28.783354  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:28.783415  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:28.799765  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:28.799785  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:28.875190  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:28.865163   13003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:28.865941   13003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:28.867993   13003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:28.868552   13003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:28.870594   13003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:28.865163   13003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:28.865941   13003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:28.867993   13003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:28.868552   13003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:28.870594   13003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:53:28.875200  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:28.875211  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:28.947238  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:28.947258  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:31.487136  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:31.497608  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:31.497670  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:31.524319  405191 cri.go:89] found id: ""
	I1206 10:53:31.524333  405191 logs.go:282] 0 containers: []
	W1206 10:53:31.524341  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:31.524347  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:31.524409  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:31.550830  405191 cri.go:89] found id: ""
	I1206 10:53:31.550845  405191 logs.go:282] 0 containers: []
	W1206 10:53:31.550852  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:31.550857  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:31.550925  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:31.577502  405191 cri.go:89] found id: ""
	I1206 10:53:31.577516  405191 logs.go:282] 0 containers: []
	W1206 10:53:31.577523  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:31.577528  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:31.577587  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:31.604074  405191 cri.go:89] found id: ""
	I1206 10:53:31.604088  405191 logs.go:282] 0 containers: []
	W1206 10:53:31.604095  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:31.604100  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:31.604157  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:31.630962  405191 cri.go:89] found id: ""
	I1206 10:53:31.630976  405191 logs.go:282] 0 containers: []
	W1206 10:53:31.630984  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:31.630989  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:31.631053  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:31.656604  405191 cri.go:89] found id: ""
	I1206 10:53:31.656619  405191 logs.go:282] 0 containers: []
	W1206 10:53:31.656626  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:31.656632  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:31.656695  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:31.682731  405191 cri.go:89] found id: ""
	I1206 10:53:31.682745  405191 logs.go:282] 0 containers: []
	W1206 10:53:31.682752  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:31.682760  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:31.682771  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:31.715043  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:31.715059  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:31.780742  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:31.780762  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:31.795393  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:31.795410  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:31.863799  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:31.855344   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:31.856015   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:31.857739   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:31.858189   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:31.859847   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:31.855344   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:31.856015   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:31.857739   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:31.858189   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:31.859847   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:53:31.863809  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:31.863820  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:34.432706  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:34.442775  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:34.442837  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:34.468444  405191 cri.go:89] found id: ""
	I1206 10:53:34.468458  405191 logs.go:282] 0 containers: []
	W1206 10:53:34.468465  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:34.468471  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:34.468536  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:34.494328  405191 cri.go:89] found id: ""
	I1206 10:53:34.494343  405191 logs.go:282] 0 containers: []
	W1206 10:53:34.494350  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:34.494356  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:34.494418  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:34.527045  405191 cri.go:89] found id: ""
	I1206 10:53:34.527060  405191 logs.go:282] 0 containers: []
	W1206 10:53:34.527068  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:34.527076  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:34.527139  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:34.554315  405191 cri.go:89] found id: ""
	I1206 10:53:34.554328  405191 logs.go:282] 0 containers: []
	W1206 10:53:34.554335  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:34.554340  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:34.554408  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:34.579994  405191 cri.go:89] found id: ""
	I1206 10:53:34.580009  405191 logs.go:282] 0 containers: []
	W1206 10:53:34.580024  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:34.580030  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:34.580093  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:34.608896  405191 cri.go:89] found id: ""
	I1206 10:53:34.608910  405191 logs.go:282] 0 containers: []
	W1206 10:53:34.608917  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:34.608925  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:34.608983  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:34.638506  405191 cri.go:89] found id: ""
	I1206 10:53:34.638521  405191 logs.go:282] 0 containers: []
	W1206 10:53:34.638528  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:34.638536  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:34.638549  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:34.700281  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:34.691941   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:34.692724   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:34.693733   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:34.694312   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:34.696014   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:34.691941   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:34.692724   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:34.693733   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:34.694312   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:34.696014   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:53:34.700290  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:34.700302  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:34.773019  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:34.773040  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:34.803610  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:34.803628  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:34.870473  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:34.870498  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
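	(Since every scan keeps returning "0 containers: []", the next question is why the kubelet never brings the static pods up at all. Two generic checks, assumptions about where to look rather than steps this harness performs, are the kubeadm static-pod manifest directory and the error lines in the kubelet journal:

	    ls -l /etc/kubernetes/manifests
	    sudo journalctl -u kubelet -n 400 | grep -Ei 'error|fail' | tail -n 20
	)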
	I1206 10:53:37.386935  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:37.397529  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:37.397618  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:37.422529  405191 cri.go:89] found id: ""
	I1206 10:53:37.422543  405191 logs.go:282] 0 containers: []
	W1206 10:53:37.422550  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:37.422556  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:37.422613  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:37.447810  405191 cri.go:89] found id: ""
	I1206 10:53:37.447824  405191 logs.go:282] 0 containers: []
	W1206 10:53:37.447830  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:37.447836  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:37.447895  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:37.473775  405191 cri.go:89] found id: ""
	I1206 10:53:37.473794  405191 logs.go:282] 0 containers: []
	W1206 10:53:37.473801  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:37.473806  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:37.473862  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:37.499349  405191 cri.go:89] found id: ""
	I1206 10:53:37.499362  405191 logs.go:282] 0 containers: []
	W1206 10:53:37.499370  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:37.499400  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:37.499468  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:37.526194  405191 cri.go:89] found id: ""
	I1206 10:53:37.526208  405191 logs.go:282] 0 containers: []
	W1206 10:53:37.526216  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:37.526221  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:37.526286  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:37.552021  405191 cri.go:89] found id: ""
	I1206 10:53:37.552041  405191 logs.go:282] 0 containers: []
	W1206 10:53:37.552049  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:37.552054  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:37.552113  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:37.577455  405191 cri.go:89] found id: ""
	I1206 10:53:37.577469  405191 logs.go:282] 0 containers: []
	W1206 10:53:37.577476  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:37.577484  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:37.577495  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:37.605307  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:37.605324  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:37.674813  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:37.674836  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:37.689252  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:37.689268  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:37.751707  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:37.743090   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:37.743746   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:37.745542   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:37.746167   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:37.747937   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:53:37.751719  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:37.751730  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:40.320654  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:40.331310  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:40.331372  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:40.357691  405191 cri.go:89] found id: ""
	I1206 10:53:40.357706  405191 logs.go:282] 0 containers: []
	W1206 10:53:40.357721  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:40.357726  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:40.357789  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:40.383818  405191 cri.go:89] found id: ""
	I1206 10:53:40.383833  405191 logs.go:282] 0 containers: []
	W1206 10:53:40.383841  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:40.383847  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:40.383904  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:40.412121  405191 cri.go:89] found id: ""
	I1206 10:53:40.412134  405191 logs.go:282] 0 containers: []
	W1206 10:53:40.412141  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:40.412146  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:40.412204  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:40.438527  405191 cri.go:89] found id: ""
	I1206 10:53:40.438542  405191 logs.go:282] 0 containers: []
	W1206 10:53:40.438549  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:40.438554  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:40.438616  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:40.465329  405191 cri.go:89] found id: ""
	I1206 10:53:40.465344  405191 logs.go:282] 0 containers: []
	W1206 10:53:40.465351  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:40.465356  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:40.465420  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:40.491939  405191 cri.go:89] found id: ""
	I1206 10:53:40.491952  405191 logs.go:282] 0 containers: []
	W1206 10:53:40.491960  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:40.491965  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:40.492029  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:40.516801  405191 cri.go:89] found id: ""
	I1206 10:53:40.516821  405191 logs.go:282] 0 containers: []
	W1206 10:53:40.516828  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:40.516836  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:40.516848  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:40.593042  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:40.593062  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:40.608966  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:40.608986  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:40.675818  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:40.665869   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:40.667834   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:40.668210   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:40.669803   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:40.670394   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:53:40.675828  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:40.675841  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:40.744680  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:40.744702  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:43.275550  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:43.285722  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:43.285783  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:43.312235  405191 cri.go:89] found id: ""
	I1206 10:53:43.312249  405191 logs.go:282] 0 containers: []
	W1206 10:53:43.312262  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:43.312278  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:43.312337  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:43.338204  405191 cri.go:89] found id: ""
	I1206 10:53:43.338219  405191 logs.go:282] 0 containers: []
	W1206 10:53:43.338226  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:43.338249  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:43.338321  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:43.363434  405191 cri.go:89] found id: ""
	I1206 10:53:43.363455  405191 logs.go:282] 0 containers: []
	W1206 10:53:43.363463  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:43.363480  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:43.363562  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:43.390724  405191 cri.go:89] found id: ""
	I1206 10:53:43.390738  405191 logs.go:282] 0 containers: []
	W1206 10:53:43.390745  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:43.390750  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:43.390824  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:43.416427  405191 cri.go:89] found id: ""
	I1206 10:53:43.416442  405191 logs.go:282] 0 containers: []
	W1206 10:53:43.416449  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:43.416454  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:43.416511  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:43.446598  405191 cri.go:89] found id: ""
	I1206 10:53:43.446612  405191 logs.go:282] 0 containers: []
	W1206 10:53:43.446619  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:43.446625  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:43.446695  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:43.472759  405191 cri.go:89] found id: ""
	I1206 10:53:43.472773  405191 logs.go:282] 0 containers: []
	W1206 10:53:43.472779  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:43.472787  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:43.472797  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:43.538686  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:43.538706  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:43.553731  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:43.553746  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:43.618535  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:43.609715   13531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:43.610485   13531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:43.612195   13531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:43.612812   13531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:43.614536   13531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:53:43.618556  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:43.618570  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:43.690132  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:43.690152  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:46.225047  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:46.236105  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:46.236179  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:46.269036  405191 cri.go:89] found id: ""
	I1206 10:53:46.269066  405191 logs.go:282] 0 containers: []
	W1206 10:53:46.269074  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:46.269079  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:46.269151  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:46.300616  405191 cri.go:89] found id: ""
	I1206 10:53:46.300631  405191 logs.go:282] 0 containers: []
	W1206 10:53:46.300639  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:46.300645  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:46.300707  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:46.330077  405191 cri.go:89] found id: ""
	I1206 10:53:46.330102  405191 logs.go:282] 0 containers: []
	W1206 10:53:46.330110  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:46.330115  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:46.330189  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:46.361893  405191 cri.go:89] found id: ""
	I1206 10:53:46.361908  405191 logs.go:282] 0 containers: []
	W1206 10:53:46.361915  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:46.361920  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:46.361991  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:46.387920  405191 cri.go:89] found id: ""
	I1206 10:53:46.387934  405191 logs.go:282] 0 containers: []
	W1206 10:53:46.387941  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:46.387947  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:46.388006  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:46.415440  405191 cri.go:89] found id: ""
	I1206 10:53:46.415463  405191 logs.go:282] 0 containers: []
	W1206 10:53:46.415470  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:46.415475  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:46.415534  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:46.442198  405191 cri.go:89] found id: ""
	I1206 10:53:46.442211  405191 logs.go:282] 0 containers: []
	W1206 10:53:46.442219  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:46.442226  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:46.442239  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:46.457274  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:46.457290  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:46.520346  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:46.512290   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:46.512824   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:46.514476   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:46.514946   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:46.516438   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:53:46.520388  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:46.520399  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:46.595642  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:46.595673  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:46.626749  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:46.626769  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:49.193445  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:49.203743  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:49.203807  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:49.233557  405191 cri.go:89] found id: ""
	I1206 10:53:49.233571  405191 logs.go:282] 0 containers: []
	W1206 10:53:49.233578  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:49.233583  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:49.233643  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:49.265569  405191 cri.go:89] found id: ""
	I1206 10:53:49.265583  405191 logs.go:282] 0 containers: []
	W1206 10:53:49.265590  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:49.265595  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:49.265651  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:49.296146  405191 cri.go:89] found id: ""
	I1206 10:53:49.296159  405191 logs.go:282] 0 containers: []
	W1206 10:53:49.296166  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:49.296172  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:49.296232  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:49.321471  405191 cri.go:89] found id: ""
	I1206 10:53:49.321485  405191 logs.go:282] 0 containers: []
	W1206 10:53:49.321492  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:49.321498  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:49.321556  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:49.346537  405191 cri.go:89] found id: ""
	I1206 10:53:49.346551  405191 logs.go:282] 0 containers: []
	W1206 10:53:49.346571  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:49.346577  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:49.346693  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:49.372292  405191 cri.go:89] found id: ""
	I1206 10:53:49.372307  405191 logs.go:282] 0 containers: []
	W1206 10:53:49.372314  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:49.372320  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:49.372382  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:49.397395  405191 cri.go:89] found id: ""
	I1206 10:53:49.397408  405191 logs.go:282] 0 containers: []
	W1206 10:53:49.397415  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:49.397422  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:49.397432  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:49.464359  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:49.464378  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:49.479746  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:49.479762  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:49.542949  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:49.534167   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:49.534752   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:49.536598   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:49.537091   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:49.538580   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:53:49.542959  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:49.542969  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:49.612749  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:49.612769  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:52.142276  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:52.152804  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:52.152867  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:52.179560  405191 cri.go:89] found id: ""
	I1206 10:53:52.179575  405191 logs.go:282] 0 containers: []
	W1206 10:53:52.179582  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:52.179587  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:52.179642  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:52.204827  405191 cri.go:89] found id: ""
	I1206 10:53:52.204842  405191 logs.go:282] 0 containers: []
	W1206 10:53:52.204849  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:52.204854  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:52.204917  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:52.250790  405191 cri.go:89] found id: ""
	I1206 10:53:52.250804  405191 logs.go:282] 0 containers: []
	W1206 10:53:52.250811  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:52.250816  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:52.250886  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:52.282140  405191 cri.go:89] found id: ""
	I1206 10:53:52.282153  405191 logs.go:282] 0 containers: []
	W1206 10:53:52.282161  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:52.282166  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:52.282225  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:52.314373  405191 cri.go:89] found id: ""
	I1206 10:53:52.314387  405191 logs.go:282] 0 containers: []
	W1206 10:53:52.314395  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:52.314400  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:52.314471  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:52.339037  405191 cri.go:89] found id: ""
	I1206 10:53:52.339051  405191 logs.go:282] 0 containers: []
	W1206 10:53:52.339058  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:52.339064  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:52.339124  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:52.366113  405191 cri.go:89] found id: ""
	I1206 10:53:52.366127  405191 logs.go:282] 0 containers: []
	W1206 10:53:52.366134  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:52.366142  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:52.366152  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:52.436368  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:52.436388  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:52.451468  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:52.451487  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:52.518739  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:52.509542   13849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:52.509966   13849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:52.511603   13849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:52.511955   13849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:52.513754   13849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:53:52.518760  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:52.518777  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:52.593784  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:52.593805  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:55.124735  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:55.135510  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:55.135574  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:55.162613  405191 cri.go:89] found id: ""
	I1206 10:53:55.162626  405191 logs.go:282] 0 containers: []
	W1206 10:53:55.162633  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:55.162638  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:55.162703  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:55.189655  405191 cri.go:89] found id: ""
	I1206 10:53:55.189669  405191 logs.go:282] 0 containers: []
	W1206 10:53:55.189676  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:55.189682  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:55.189786  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:55.215289  405191 cri.go:89] found id: ""
	I1206 10:53:55.215303  405191 logs.go:282] 0 containers: []
	W1206 10:53:55.215310  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:55.215315  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:55.215402  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:55.247890  405191 cri.go:89] found id: ""
	I1206 10:53:55.247913  405191 logs.go:282] 0 containers: []
	W1206 10:53:55.247921  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:55.247926  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:55.247992  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:55.283368  405191 cri.go:89] found id: ""
	I1206 10:53:55.283409  405191 logs.go:282] 0 containers: []
	W1206 10:53:55.283416  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:55.283422  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:55.283516  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:55.310596  405191 cri.go:89] found id: ""
	I1206 10:53:55.310609  405191 logs.go:282] 0 containers: []
	W1206 10:53:55.310627  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:55.310632  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:55.310712  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:55.337361  405191 cri.go:89] found id: ""
	I1206 10:53:55.337374  405191 logs.go:282] 0 containers: []
	W1206 10:53:55.337381  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:55.337389  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:55.337399  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:55.404341  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:55.404361  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:55.419687  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:55.419705  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:55.485498  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:55.476614   13958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:55.477821   13958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:55.478840   13958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:55.479810   13958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:55.480435   13958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:53:55.485509  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:55.485522  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:55.555911  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:55.555932  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:58.088179  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:58.099010  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:58.099069  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:58.124686  405191 cri.go:89] found id: ""
	I1206 10:53:58.124700  405191 logs.go:282] 0 containers: []
	W1206 10:53:58.124710  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:58.124716  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:58.124773  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:58.149717  405191 cri.go:89] found id: ""
	I1206 10:53:58.149730  405191 logs.go:282] 0 containers: []
	W1206 10:53:58.149738  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:58.149743  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:58.149800  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:58.177293  405191 cri.go:89] found id: ""
	I1206 10:53:58.177307  405191 logs.go:282] 0 containers: []
	W1206 10:53:58.177314  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:58.177319  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:58.177389  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:58.203540  405191 cri.go:89] found id: ""
	I1206 10:53:58.203554  405191 logs.go:282] 0 containers: []
	W1206 10:53:58.203562  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:58.203567  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:58.203632  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:58.237354  405191 cri.go:89] found id: ""
	I1206 10:53:58.237377  405191 logs.go:282] 0 containers: []
	W1206 10:53:58.237385  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:58.237390  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:58.237459  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:58.269725  405191 cri.go:89] found id: ""
	I1206 10:53:58.269739  405191 logs.go:282] 0 containers: []
	W1206 10:53:58.269746  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:58.269751  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:58.269821  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:58.297406  405191 cri.go:89] found id: ""
	I1206 10:53:58.297420  405191 logs.go:282] 0 containers: []
	W1206 10:53:58.297427  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:58.297435  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:58.297445  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:58.363296  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:58.363319  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:58.379154  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:58.379170  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:58.448306  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:58.438857   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:58.439654   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:58.441442   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:58.441790   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:58.443511   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:53:58.448317  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:58.448331  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:58.518384  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:58.518408  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:01.052183  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:01.062404  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:01.062462  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:01.087509  405191 cri.go:89] found id: ""
	I1206 10:54:01.087523  405191 logs.go:282] 0 containers: []
	W1206 10:54:01.087530  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:01.087536  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:01.087598  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:01.113371  405191 cri.go:89] found id: ""
	I1206 10:54:01.113385  405191 logs.go:282] 0 containers: []
	W1206 10:54:01.113392  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:01.113397  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:01.113456  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:01.140194  405191 cri.go:89] found id: ""
	I1206 10:54:01.140208  405191 logs.go:282] 0 containers: []
	W1206 10:54:01.140214  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:01.140220  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:01.140282  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:01.166431  405191 cri.go:89] found id: ""
	I1206 10:54:01.166445  405191 logs.go:282] 0 containers: []
	W1206 10:54:01.166452  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:01.166460  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:01.166523  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:01.195742  405191 cri.go:89] found id: ""
	I1206 10:54:01.195756  405191 logs.go:282] 0 containers: []
	W1206 10:54:01.195764  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:01.195769  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:01.195835  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:01.228731  405191 cri.go:89] found id: ""
	I1206 10:54:01.228746  405191 logs.go:282] 0 containers: []
	W1206 10:54:01.228753  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:01.228759  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:01.228821  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:01.260175  405191 cri.go:89] found id: ""
	I1206 10:54:01.260189  405191 logs.go:282] 0 containers: []
	W1206 10:54:01.260196  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:01.260204  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:01.260214  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:01.337819  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:01.337839  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:01.353486  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:01.353502  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:01.423278  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:01.414904   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:01.415292   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:01.417033   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:01.417517   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:01.418780   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:01.414904   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:01.415292   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:01.417033   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:01.417517   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:01.418780   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:01.423288  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:01.423299  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:01.492536  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:01.492556  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
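	The cycle above is minikube's readiness probe in miniature: each control-plane component is looked up by name through `crictl`, and an empty result produces the `No container was found matching ...` warnings, after which the loop falls through to gathering kubelet, dmesg, describe-nodes, and CRI-O logs. A minimal standalone sketch of that probe (not minikube's actual implementation; the component list and the `sudo crictl ps -a --quiet --name=...` invocation are copied from the commands visible in the log):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Component names as they appear in the log's probe sequence.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, name := range components {
		// Same probe as the log's `sudo crictl ps -a --quiet --name=<component>`.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("probe for %q failed: %v\n", name, err)
			continue
		}
		if ids := strings.Fields(string(out)); len(ids) == 0 {
			// This is the condition behind every "0 containers" line above.
			fmt.Printf("no container found matching %q\n", name)
		} else {
			fmt.Printf("%q: %v\n", name, ids)
		}
	}
}
```

	In this run every probe returns an empty ID list, which is why each pass ends in the diagnostic-gathering fallback rather than a health check.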
	I1206 10:54:04.028526  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:04.039535  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:04.039600  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:04.069150  405191 cri.go:89] found id: ""
	I1206 10:54:04.069164  405191 logs.go:282] 0 containers: []
	W1206 10:54:04.069172  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:04.069177  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:04.069238  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:04.100343  405191 cri.go:89] found id: ""
	I1206 10:54:04.100357  405191 logs.go:282] 0 containers: []
	W1206 10:54:04.100364  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:04.100369  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:04.100431  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:04.127347  405191 cri.go:89] found id: ""
	I1206 10:54:04.127361  405191 logs.go:282] 0 containers: []
	W1206 10:54:04.127368  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:04.127395  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:04.127466  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:04.154542  405191 cri.go:89] found id: ""
	I1206 10:54:04.154557  405191 logs.go:282] 0 containers: []
	W1206 10:54:04.154564  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:04.154569  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:04.154628  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:04.181647  405191 cri.go:89] found id: ""
	I1206 10:54:04.181661  405191 logs.go:282] 0 containers: []
	W1206 10:54:04.181668  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:04.181676  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:04.181739  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:04.210872  405191 cri.go:89] found id: ""
	I1206 10:54:04.210886  405191 logs.go:282] 0 containers: []
	W1206 10:54:04.210893  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:04.210899  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:04.210962  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:04.246454  405191 cri.go:89] found id: ""
	I1206 10:54:04.246468  405191 logs.go:282] 0 containers: []
	W1206 10:54:04.246482  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:04.246490  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:04.246501  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:04.322848  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:04.322872  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:04.338928  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:04.338945  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:04.409905  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:04.400164   14270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:04.400961   14270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:04.402781   14270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:04.403461   14270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:04.404662   14270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:04.400164   14270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:04.400961   14270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:04.402781   14270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:04.403461   14270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:04.404662   14270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:04.409916  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:04.409928  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:04.480369  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:04.480389  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
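	Every `kubectl describe nodes` attempt above fails identically: `dial tcp [::1]:8441: connect: connection refused`, meaning nothing is listening on localhost:8441, the apiserver port this profile uses. A minimal sketch, assuming only the address shown in the errors, that reproduces the same failure mode with a plain TCP dial:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// kubectl's client hits exactly this condition before any API call is made.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err) // prints the same "connection refused" error
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8441")
}
```

	Until that dial succeeds, every kubectl-based gather step in the log can only ever produce the repeated memcache.go "Unhandled Error" stderr seen in each cycle.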
	I1206 10:54:07.012345  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:07.022891  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:07.022962  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:07.049835  405191 cri.go:89] found id: ""
	I1206 10:54:07.049849  405191 logs.go:282] 0 containers: []
	W1206 10:54:07.049856  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:07.049861  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:07.049925  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:07.076617  405191 cri.go:89] found id: ""
	I1206 10:54:07.076631  405191 logs.go:282] 0 containers: []
	W1206 10:54:07.076637  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:07.076643  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:07.076704  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:07.103202  405191 cri.go:89] found id: ""
	I1206 10:54:07.103216  405191 logs.go:282] 0 containers: []
	W1206 10:54:07.103223  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:07.103229  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:07.103288  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:07.129964  405191 cri.go:89] found id: ""
	I1206 10:54:07.129977  405191 logs.go:282] 0 containers: []
	W1206 10:54:07.129984  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:07.129989  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:07.130048  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:07.157459  405191 cri.go:89] found id: ""
	I1206 10:54:07.157473  405191 logs.go:282] 0 containers: []
	W1206 10:54:07.157480  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:07.157485  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:07.157551  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:07.183797  405191 cri.go:89] found id: ""
	I1206 10:54:07.183811  405191 logs.go:282] 0 containers: []
	W1206 10:54:07.183818  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:07.183823  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:07.183881  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:07.209675  405191 cri.go:89] found id: ""
	I1206 10:54:07.209689  405191 logs.go:282] 0 containers: []
	W1206 10:54:07.209697  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:07.209704  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:07.209715  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:07.228202  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:07.228225  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:07.312770  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:07.304201   14373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:07.304672   14373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:07.306492   14373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:07.307083   14373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:07.308768   14373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:07.304201   14373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:07.304672   14373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:07.306492   14373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:07.307083   14373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:07.308768   14373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:07.312782  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:07.312792  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:07.383254  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:07.383275  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:07.414045  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:07.414060  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:09.985551  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:09.995745  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:09.995806  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:10.030868  405191 cri.go:89] found id: ""
	I1206 10:54:10.030884  405191 logs.go:282] 0 containers: []
	W1206 10:54:10.030892  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:10.030898  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:10.030967  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:10.060505  405191 cri.go:89] found id: ""
	I1206 10:54:10.060520  405191 logs.go:282] 0 containers: []
	W1206 10:54:10.060527  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:10.060532  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:10.060596  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:10.087945  405191 cri.go:89] found id: ""
	I1206 10:54:10.087979  405191 logs.go:282] 0 containers: []
	W1206 10:54:10.087986  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:10.087992  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:10.088069  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:10.116434  405191 cri.go:89] found id: ""
	I1206 10:54:10.116448  405191 logs.go:282] 0 containers: []
	W1206 10:54:10.116455  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:10.116461  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:10.116523  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:10.144560  405191 cri.go:89] found id: ""
	I1206 10:54:10.144572  405191 logs.go:282] 0 containers: []
	W1206 10:54:10.144579  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:10.144584  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:10.144645  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:10.173019  405191 cri.go:89] found id: ""
	I1206 10:54:10.173033  405191 logs.go:282] 0 containers: []
	W1206 10:54:10.173040  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:10.173046  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:10.173105  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:10.200809  405191 cri.go:89] found id: ""
	I1206 10:54:10.200823  405191 logs.go:282] 0 containers: []
	W1206 10:54:10.200830  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:10.200837  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:10.200847  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:10.215623  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:10.215642  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:10.300302  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:10.291573   14479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:10.292121   14479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:10.293856   14479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:10.294451   14479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:10.295989   14479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:10.291573   14479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:10.292121   14479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:10.293856   14479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:10.294451   14479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:10.295989   14479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:10.300314  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:10.300325  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:10.369603  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:10.369624  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:10.402671  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:10.402687  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:12.968162  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:12.978411  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:12.978473  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:13.010579  405191 cri.go:89] found id: ""
	I1206 10:54:13.010593  405191 logs.go:282] 0 containers: []
	W1206 10:54:13.010601  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:13.010606  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:13.010669  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:13.037103  405191 cri.go:89] found id: ""
	I1206 10:54:13.037118  405191 logs.go:282] 0 containers: []
	W1206 10:54:13.037125  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:13.037131  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:13.037199  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:13.063109  405191 cri.go:89] found id: ""
	I1206 10:54:13.063124  405191 logs.go:282] 0 containers: []
	W1206 10:54:13.063131  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:13.063136  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:13.063195  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:13.088780  405191 cri.go:89] found id: ""
	I1206 10:54:13.088794  405191 logs.go:282] 0 containers: []
	W1206 10:54:13.088801  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:13.088806  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:13.088868  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:13.114682  405191 cri.go:89] found id: ""
	I1206 10:54:13.114696  405191 logs.go:282] 0 containers: []
	W1206 10:54:13.114703  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:13.114708  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:13.114952  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:13.141850  405191 cri.go:89] found id: ""
	I1206 10:54:13.141866  405191 logs.go:282] 0 containers: []
	W1206 10:54:13.141873  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:13.141880  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:13.141945  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:13.167958  405191 cri.go:89] found id: ""
	I1206 10:54:13.167975  405191 logs.go:282] 0 containers: []
	W1206 10:54:13.167982  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:13.167990  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:13.168002  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:13.237314  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:13.237335  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:13.254137  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:13.254164  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:13.322226  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:13.313022   14585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:13.313640   14585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:13.315145   14585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:13.315780   14585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:13.317512   14585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:13.313022   14585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:13.313640   14585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:13.315145   14585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:13.315780   14585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:13.317512   14585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:13.322237  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:13.322248  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:13.394938  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:13.394958  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:15.923162  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:15.933287  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:15.933346  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:15.958679  405191 cri.go:89] found id: ""
	I1206 10:54:15.958694  405191 logs.go:282] 0 containers: []
	W1206 10:54:15.958701  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:15.958706  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:15.958768  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:15.986252  405191 cri.go:89] found id: ""
	I1206 10:54:15.986267  405191 logs.go:282] 0 containers: []
	W1206 10:54:15.986274  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:15.986279  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:15.986339  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:16.015947  405191 cri.go:89] found id: ""
	I1206 10:54:16.015961  405191 logs.go:282] 0 containers: []
	W1206 10:54:16.015968  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:16.015973  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:16.016038  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:16.046583  405191 cri.go:89] found id: ""
	I1206 10:54:16.046597  405191 logs.go:282] 0 containers: []
	W1206 10:54:16.046604  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:16.046609  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:16.046673  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:16.073401  405191 cri.go:89] found id: ""
	I1206 10:54:16.073415  405191 logs.go:282] 0 containers: []
	W1206 10:54:16.073422  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:16.073428  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:16.073489  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:16.099301  405191 cri.go:89] found id: ""
	I1206 10:54:16.099315  405191 logs.go:282] 0 containers: []
	W1206 10:54:16.099321  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:16.099327  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:16.099409  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:16.132045  405191 cri.go:89] found id: ""
	I1206 10:54:16.132060  405191 logs.go:282] 0 containers: []
	W1206 10:54:16.132067  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:16.132075  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:16.132086  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:16.201949  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:16.191660   14680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:16.193954   14680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:16.194695   14680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:16.196370   14680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:16.196866   14680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:16.191660   14680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:16.193954   14680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:16.194695   14680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:16.196370   14680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:16.196866   14680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:16.201962  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:16.201972  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:16.277750  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:16.277769  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:16.311130  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:16.311148  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:16.377771  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:16.377793  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:18.893108  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:18.903283  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:18.903345  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:18.927861  405191 cri.go:89] found id: ""
	I1206 10:54:18.927875  405191 logs.go:282] 0 containers: []
	W1206 10:54:18.927882  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:18.927887  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:18.927945  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:18.953460  405191 cri.go:89] found id: ""
	I1206 10:54:18.953474  405191 logs.go:282] 0 containers: []
	W1206 10:54:18.953482  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:18.953486  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:18.953563  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:18.981063  405191 cri.go:89] found id: ""
	I1206 10:54:18.981077  405191 logs.go:282] 0 containers: []
	W1206 10:54:18.981088  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:18.981093  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:18.981154  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:19.011134  405191 cri.go:89] found id: ""
	I1206 10:54:19.011148  405191 logs.go:282] 0 containers: []
	W1206 10:54:19.011156  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:19.011161  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:19.011221  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:19.037866  405191 cri.go:89] found id: ""
	I1206 10:54:19.037889  405191 logs.go:282] 0 containers: []
	W1206 10:54:19.037895  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:19.037901  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:19.037972  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:19.067672  405191 cri.go:89] found id: ""
	I1206 10:54:19.067685  405191 logs.go:282] 0 containers: []
	W1206 10:54:19.067692  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:19.067697  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:19.067753  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:19.092891  405191 cri.go:89] found id: ""
	I1206 10:54:19.092906  405191 logs.go:282] 0 containers: []
	W1206 10:54:19.092913  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:19.092921  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:19.092933  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:19.158186  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:19.149512   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:19.150168   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:19.151831   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:19.152364   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:19.154131   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:19.149512   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:19.150168   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:19.151831   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:19.152364   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:19.154131   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:19.158196  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:19.158209  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:19.231681  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:19.231701  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:19.267680  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:19.267704  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:19.341777  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:19.341796  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:21.856895  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:21.867600  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:21.867659  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:21.899562  405191 cri.go:89] found id: ""
	I1206 10:54:21.899576  405191 logs.go:282] 0 containers: []
	W1206 10:54:21.899583  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:21.899589  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:21.899647  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:21.924433  405191 cri.go:89] found id: ""
	I1206 10:54:21.924446  405191 logs.go:282] 0 containers: []
	W1206 10:54:21.924454  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:21.924459  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:21.924517  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:21.949461  405191 cri.go:89] found id: ""
	I1206 10:54:21.949476  405191 logs.go:282] 0 containers: []
	W1206 10:54:21.949482  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:21.949493  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:21.949550  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:21.976373  405191 cri.go:89] found id: ""
	I1206 10:54:21.976388  405191 logs.go:282] 0 containers: []
	W1206 10:54:21.976396  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:21.976401  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:21.976457  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:22.025051  405191 cri.go:89] found id: ""
	I1206 10:54:22.025074  405191 logs.go:282] 0 containers: []
	W1206 10:54:22.025095  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:22.025101  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:22.025214  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:22.054790  405191 cri.go:89] found id: ""
	I1206 10:54:22.054804  405191 logs.go:282] 0 containers: []
	W1206 10:54:22.054811  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:22.054817  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:22.054873  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:22.081220  405191 cri.go:89] found id: ""
	I1206 10:54:22.081235  405191 logs.go:282] 0 containers: []
	W1206 10:54:22.081242  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:22.081251  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:22.081262  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:22.147339  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:22.147359  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:22.162252  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:22.162268  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:22.233807  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:22.219327   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:22.220102   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:22.225452   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:22.227256   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:22.228864   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:22.219327   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:22.220102   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:22.225452   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:22.227256   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:22.228864   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:22.233819  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:22.233838  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:22.312101  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:22.312123  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:24.852672  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:24.863210  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:24.863271  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:24.889673  405191 cri.go:89] found id: ""
	I1206 10:54:24.889687  405191 logs.go:282] 0 containers: []
	W1206 10:54:24.889695  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:24.889700  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:24.889758  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:24.920816  405191 cri.go:89] found id: ""
	I1206 10:54:24.920830  405191 logs.go:282] 0 containers: []
	W1206 10:54:24.920837  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:24.920842  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:24.920900  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:24.945958  405191 cri.go:89] found id: ""
	I1206 10:54:24.945972  405191 logs.go:282] 0 containers: []
	W1206 10:54:24.945980  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:24.945985  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:24.946046  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:24.970886  405191 cri.go:89] found id: ""
	I1206 10:54:24.970900  405191 logs.go:282] 0 containers: []
	W1206 10:54:24.970907  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:24.970912  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:24.970970  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:25.000298  405191 cri.go:89] found id: ""
	I1206 10:54:25.000315  405191 logs.go:282] 0 containers: []
	W1206 10:54:25.000323  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:25.000329  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:25.000399  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:25.033867  405191 cri.go:89] found id: ""
	I1206 10:54:25.033882  405191 logs.go:282] 0 containers: []
	W1206 10:54:25.033890  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:25.033895  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:25.033960  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:25.060149  405191 cri.go:89] found id: ""
	I1206 10:54:25.060162  405191 logs.go:282] 0 containers: []
	W1206 10:54:25.060169  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:25.060177  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:25.060188  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:25.128734  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:25.120144   14994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:25.120771   14994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:25.122547   14994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:25.123145   14994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:25.124861   14994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:25.120144   14994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:25.120771   14994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:25.122547   14994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:25.123145   14994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:25.124861   14994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:25.128746  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:25.128757  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:25.198421  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:25.198443  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:25.239321  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:25.239341  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:25.316857  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:25.316878  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:27.833465  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:27.844470  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:27.844528  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:27.870606  405191 cri.go:89] found id: ""
	I1206 10:54:27.870621  405191 logs.go:282] 0 containers: []
	W1206 10:54:27.870628  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:27.870633  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:27.870693  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:27.894893  405191 cri.go:89] found id: ""
	I1206 10:54:27.894906  405191 logs.go:282] 0 containers: []
	W1206 10:54:27.894913  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:27.894918  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:27.894973  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:27.920116  405191 cri.go:89] found id: ""
	I1206 10:54:27.920129  405191 logs.go:282] 0 containers: []
	W1206 10:54:27.920136  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:27.920142  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:27.920201  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:27.946774  405191 cri.go:89] found id: ""
	I1206 10:54:27.946788  405191 logs.go:282] 0 containers: []
	W1206 10:54:27.946798  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:27.946806  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:27.946869  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:27.973164  405191 cri.go:89] found id: ""
	I1206 10:54:27.973178  405191 logs.go:282] 0 containers: []
	W1206 10:54:27.973185  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:27.973190  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:27.973247  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:28.005225  405191 cri.go:89] found id: ""
	I1206 10:54:28.005240  405191 logs.go:282] 0 containers: []
	W1206 10:54:28.005248  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:28.005255  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:28.005329  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:28.034341  405191 cri.go:89] found id: ""
	I1206 10:54:28.034355  405191 logs.go:282] 0 containers: []
	W1206 10:54:28.034362  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:28.034370  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:28.034381  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:28.107547  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:28.107567  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:28.136561  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:28.136578  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:28.206187  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:28.206206  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:28.224556  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:28.224580  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:28.311110  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:28.302520   15128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:28.303509   15128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:28.305089   15128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:28.305582   15128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:28.307158   15128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:54:30.811550  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:30.821711  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:30.821769  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:30.850956  405191 cri.go:89] found id: ""
	I1206 10:54:30.850970  405191 logs.go:282] 0 containers: []
	W1206 10:54:30.850979  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:30.850984  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:30.851045  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:30.876542  405191 cri.go:89] found id: ""
	I1206 10:54:30.876558  405191 logs.go:282] 0 containers: []
	W1206 10:54:30.876565  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:30.876571  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:30.876630  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:30.902552  405191 cri.go:89] found id: ""
	I1206 10:54:30.902566  405191 logs.go:282] 0 containers: []
	W1206 10:54:30.902573  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:30.902578  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:30.902635  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:30.928737  405191 cri.go:89] found id: ""
	I1206 10:54:30.928751  405191 logs.go:282] 0 containers: []
	W1206 10:54:30.928758  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:30.928764  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:30.928829  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:30.954309  405191 cri.go:89] found id: ""
	I1206 10:54:30.954323  405191 logs.go:282] 0 containers: []
	W1206 10:54:30.954330  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:30.954335  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:30.954394  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:30.980239  405191 cri.go:89] found id: ""
	I1206 10:54:30.980251  405191 logs.go:282] 0 containers: []
	W1206 10:54:30.980258  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:30.980263  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:30.980319  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:31.010962  405191 cri.go:89] found id: ""
	I1206 10:54:31.010977  405191 logs.go:282] 0 containers: []
	W1206 10:54:31.010985  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:31.010994  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:31.011006  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:31.078259  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:31.069995   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:31.070621   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:31.072176   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:31.072646   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:31.074155   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:54:31.078270  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:31.078282  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:31.147428  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:31.147455  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:31.181028  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:31.181045  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:31.253555  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:31.253574  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:33.770610  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:33.781236  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:33.781299  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:33.806547  405191 cri.go:89] found id: ""
	I1206 10:54:33.806561  405191 logs.go:282] 0 containers: []
	W1206 10:54:33.806568  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:33.806574  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:33.806632  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:33.832359  405191 cri.go:89] found id: ""
	I1206 10:54:33.832371  405191 logs.go:282] 0 containers: []
	W1206 10:54:33.832379  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:33.832383  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:33.832442  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:33.857194  405191 cri.go:89] found id: ""
	I1206 10:54:33.857207  405191 logs.go:282] 0 containers: []
	W1206 10:54:33.857214  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:33.857219  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:33.857280  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:33.886113  405191 cri.go:89] found id: ""
	I1206 10:54:33.886126  405191 logs.go:282] 0 containers: []
	W1206 10:54:33.886133  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:33.886138  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:33.886194  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:33.914351  405191 cri.go:89] found id: ""
	I1206 10:54:33.914364  405191 logs.go:282] 0 containers: []
	W1206 10:54:33.914371  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:33.914376  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:33.914438  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:33.939584  405191 cri.go:89] found id: ""
	I1206 10:54:33.939598  405191 logs.go:282] 0 containers: []
	W1206 10:54:33.939605  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:33.939611  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:33.939683  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:33.965467  405191 cri.go:89] found id: ""
	I1206 10:54:33.965481  405191 logs.go:282] 0 containers: []
	W1206 10:54:33.965488  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:33.965496  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:33.965506  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:34.034434  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:34.034456  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:34.068244  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:34.068263  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:34.136528  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:34.136548  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:34.151695  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:34.151713  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:34.237655  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:34.227619   15334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:34.228750   15334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:34.231547   15334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:34.232096   15334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:34.233598   15334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:54:36.737997  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:36.748632  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:36.748739  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:36.774541  405191 cri.go:89] found id: ""
	I1206 10:54:36.774554  405191 logs.go:282] 0 containers: []
	W1206 10:54:36.774563  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:36.774568  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:36.774628  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:36.804563  405191 cri.go:89] found id: ""
	I1206 10:54:36.804577  405191 logs.go:282] 0 containers: []
	W1206 10:54:36.804585  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:36.804590  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:36.804649  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:36.829295  405191 cri.go:89] found id: ""
	I1206 10:54:36.829309  405191 logs.go:282] 0 containers: []
	W1206 10:54:36.829316  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:36.829322  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:36.829384  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:36.854740  405191 cri.go:89] found id: ""
	I1206 10:54:36.854754  405191 logs.go:282] 0 containers: []
	W1206 10:54:36.854761  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:36.854767  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:36.854827  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:36.879535  405191 cri.go:89] found id: ""
	I1206 10:54:36.879548  405191 logs.go:282] 0 containers: []
	W1206 10:54:36.879555  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:36.879560  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:36.879621  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:36.908804  405191 cri.go:89] found id: ""
	I1206 10:54:36.908818  405191 logs.go:282] 0 containers: []
	W1206 10:54:36.908826  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:36.908831  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:36.908891  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:36.935290  405191 cri.go:89] found id: ""
	I1206 10:54:36.935312  405191 logs.go:282] 0 containers: []
	W1206 10:54:36.935320  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:36.935328  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:36.935338  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:37.005221  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:37.005253  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:37.023044  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:37.023070  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:37.090033  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:37.082290   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:37.082864   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:37.084384   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:37.084721   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:37.086198   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:54:37.090044  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:37.090055  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:37.158891  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:37.158911  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:39.688451  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:39.698958  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:39.699020  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:39.725003  405191 cri.go:89] found id: ""
	I1206 10:54:39.725017  405191 logs.go:282] 0 containers: []
	W1206 10:54:39.725024  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:39.725029  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:39.725086  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:39.750186  405191 cri.go:89] found id: ""
	I1206 10:54:39.750208  405191 logs.go:282] 0 containers: []
	W1206 10:54:39.750215  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:39.750221  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:39.750286  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:39.777512  405191 cri.go:89] found id: ""
	I1206 10:54:39.777527  405191 logs.go:282] 0 containers: []
	W1206 10:54:39.777534  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:39.777539  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:39.777598  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:39.805960  405191 cri.go:89] found id: ""
	I1206 10:54:39.805974  405191 logs.go:282] 0 containers: []
	W1206 10:54:39.805981  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:39.805987  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:39.806048  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:39.832070  405191 cri.go:89] found id: ""
	I1206 10:54:39.832086  405191 logs.go:282] 0 containers: []
	W1206 10:54:39.832093  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:39.832099  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:39.832162  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:39.856950  405191 cri.go:89] found id: ""
	I1206 10:54:39.856964  405191 logs.go:282] 0 containers: []
	W1206 10:54:39.856970  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:39.856976  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:39.857034  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:39.882830  405191 cri.go:89] found id: ""
	I1206 10:54:39.882844  405191 logs.go:282] 0 containers: []
	W1206 10:54:39.882851  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:39.882859  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:39.882869  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:39.948996  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:39.949016  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:39.964250  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:39.964266  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:40.040200  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:40.026040   15535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:40.026898   15535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:40.028891   15535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:40.029963   15535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:40.030727   15535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:54:40.040211  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:40.040222  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:40.112805  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:40.112828  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:42.645898  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:42.656339  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:42.656399  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:42.681441  405191 cri.go:89] found id: ""
	I1206 10:54:42.681456  405191 logs.go:282] 0 containers: []
	W1206 10:54:42.681462  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:42.681468  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:42.681529  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:42.706692  405191 cri.go:89] found id: ""
	I1206 10:54:42.706706  405191 logs.go:282] 0 containers: []
	W1206 10:54:42.706713  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:42.706718  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:42.706781  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:42.734049  405191 cri.go:89] found id: ""
	I1206 10:54:42.734063  405191 logs.go:282] 0 containers: []
	W1206 10:54:42.734070  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:42.734075  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:42.734136  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:42.759095  405191 cri.go:89] found id: ""
	I1206 10:54:42.759115  405191 logs.go:282] 0 containers: []
	W1206 10:54:42.759123  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:42.759128  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:42.759190  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:42.786861  405191 cri.go:89] found id: ""
	I1206 10:54:42.786875  405191 logs.go:282] 0 containers: []
	W1206 10:54:42.786882  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:42.786887  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:42.786949  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:42.817648  405191 cri.go:89] found id: ""
	I1206 10:54:42.817663  405191 logs.go:282] 0 containers: []
	W1206 10:54:42.817670  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:42.817675  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:42.817738  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:42.844223  405191 cri.go:89] found id: ""
	I1206 10:54:42.844245  405191 logs.go:282] 0 containers: []
	W1206 10:54:42.844253  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:42.844261  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:42.844278  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:42.914866  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:42.904424   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:42.904903   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:42.907237   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:42.908578   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:42.909360   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:54:42.914877  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:42.914888  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:42.987160  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:42.987181  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:43.017513  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:43.017529  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:43.084573  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:43.084595  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:45.600685  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:45.611239  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:45.611299  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:45.635510  405191 cri.go:89] found id: ""
	I1206 10:54:45.635525  405191 logs.go:282] 0 containers: []
	W1206 10:54:45.635532  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:45.635538  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:45.635604  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:45.664995  405191 cri.go:89] found id: ""
	I1206 10:54:45.665008  405191 logs.go:282] 0 containers: []
	W1206 10:54:45.665015  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:45.665020  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:45.665077  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:45.691036  405191 cri.go:89] found id: ""
	I1206 10:54:45.691050  405191 logs.go:282] 0 containers: []
	W1206 10:54:45.691057  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:45.691062  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:45.691120  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:45.716374  405191 cri.go:89] found id: ""
	I1206 10:54:45.716388  405191 logs.go:282] 0 containers: []
	W1206 10:54:45.716395  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:45.716400  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:45.716461  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:45.742083  405191 cri.go:89] found id: ""
	I1206 10:54:45.742097  405191 logs.go:282] 0 containers: []
	W1206 10:54:45.742105  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:45.742110  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:45.742177  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:45.767269  405191 cri.go:89] found id: ""
	I1206 10:54:45.767282  405191 logs.go:282] 0 containers: []
	W1206 10:54:45.767290  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:45.767295  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:45.767352  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:45.793130  405191 cri.go:89] found id: ""
	I1206 10:54:45.793144  405191 logs.go:282] 0 containers: []
	W1206 10:54:45.793151  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:45.793158  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:45.793169  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:45.822623  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:45.822639  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:45.889014  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:45.889036  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:45.903697  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:45.903713  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:45.967833  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:45.959169   15755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:45.960025   15755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:45.961643   15755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:45.962228   15755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:45.963959   15755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:54:45.967843  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:45.967854  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:48.539593  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:48.549488  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:48.549547  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:48.578962  405191 cri.go:89] found id: ""
	I1206 10:54:48.578976  405191 logs.go:282] 0 containers: []
	W1206 10:54:48.578983  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:48.578989  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:48.579060  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:48.604320  405191 cri.go:89] found id: ""
	I1206 10:54:48.604335  405191 logs.go:282] 0 containers: []
	W1206 10:54:48.604342  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:48.604347  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:48.604407  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:48.630562  405191 cri.go:89] found id: ""
	I1206 10:54:48.630575  405191 logs.go:282] 0 containers: []
	W1206 10:54:48.630583  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:48.630588  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:48.630645  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:48.659186  405191 cri.go:89] found id: ""
	I1206 10:54:48.659200  405191 logs.go:282] 0 containers: []
	W1206 10:54:48.659207  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:48.659218  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:48.659278  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:48.686349  405191 cri.go:89] found id: ""
	I1206 10:54:48.686363  405191 logs.go:282] 0 containers: []
	W1206 10:54:48.686371  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:48.686376  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:48.686433  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:48.712958  405191 cri.go:89] found id: ""
	I1206 10:54:48.712973  405191 logs.go:282] 0 containers: []
	W1206 10:54:48.712980  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:48.712985  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:48.713045  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:48.738763  405191 cri.go:89] found id: ""
	I1206 10:54:48.738777  405191 logs.go:282] 0 containers: []
	W1206 10:54:48.738783  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:48.738791  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:48.738801  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:48.753416  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:48.753431  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:48.818598  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:48.810121   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:48.810830   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:48.812598   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:48.813183   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:48.814760   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:54:48.818609  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:48.818620  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:48.888023  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:48.888043  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:48.917094  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:48.917110  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:51.485627  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:51.497092  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:51.497157  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:51.529254  405191 cri.go:89] found id: ""
	I1206 10:54:51.529268  405191 logs.go:282] 0 containers: []
	W1206 10:54:51.529275  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:51.529281  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:51.529340  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:51.555292  405191 cri.go:89] found id: ""
	I1206 10:54:51.555305  405191 logs.go:282] 0 containers: []
	W1206 10:54:51.555312  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:51.555316  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:51.555390  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:51.580443  405191 cri.go:89] found id: ""
	I1206 10:54:51.580458  405191 logs.go:282] 0 containers: []
	W1206 10:54:51.580465  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:51.580470  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:51.580529  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:51.605907  405191 cri.go:89] found id: ""
	I1206 10:54:51.605921  405191 logs.go:282] 0 containers: []
	W1206 10:54:51.605928  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:51.605933  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:51.605991  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:51.630731  405191 cri.go:89] found id: ""
	I1206 10:54:51.630745  405191 logs.go:282] 0 containers: []
	W1206 10:54:51.630752  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:51.630757  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:51.630816  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:51.655906  405191 cri.go:89] found id: ""
	I1206 10:54:51.655919  405191 logs.go:282] 0 containers: []
	W1206 10:54:51.655926  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:51.655931  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:51.655987  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:51.681242  405191 cri.go:89] found id: ""
	I1206 10:54:51.681256  405191 logs.go:282] 0 containers: []
	W1206 10:54:51.681267  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:51.681275  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:51.681285  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:51.750829  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:51.750849  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:51.766064  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:51.766080  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:51.831905  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:51.823637   15956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:51.824299   15956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:51.825840   15956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:51.826394   15956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:51.827960   15956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
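	The five memcache.go errors per attempt are kubectl's API discovery retries: before running describe, kubectl fetches the server's API group list and retries (five attempts here) before printing the final "connection refused" summary. Port 8441 is the non-default apiserver port this functional-test profile was started with (the stock default is 8443). Assuming curl is available in the node image, the endpoint can be probed directly:
	
	    # probe the apiserver health endpoint on the functional-test port (sketch)
	    minikube ssh -- curl -ks --max-time 5 https://localhost:8441/livez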
	I1206 10:54:51.831915  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:51.831925  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:51.901462  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:51.901484  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
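	The "container status" gather is a two-level fallback; expanded for readability it reads:
	
	    # prefer crictl when it is on PATH; `echo crictl` keeps the command
	    # string valid (failing with a clear error) when it is not
	    sudo "$(which crictl || echo crictl)" ps -a \
	      || sudo docker ps -a    # last resort on docker-runtime nodes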
	I1206 10:54:54.431319  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:54.441623  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:54.441686  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:54.470441  405191 cri.go:89] found id: ""
	I1206 10:54:54.470456  405191 logs.go:282] 0 containers: []
	W1206 10:54:54.470463  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:54.470469  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:54.470527  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:54.505844  405191 cri.go:89] found id: ""
	I1206 10:54:54.505858  405191 logs.go:282] 0 containers: []
	W1206 10:54:54.505865  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:54.505870  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:54.505931  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:54.540765  405191 cri.go:89] found id: ""
	I1206 10:54:54.540779  405191 logs.go:282] 0 containers: []
	W1206 10:54:54.540786  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:54.540791  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:54.540859  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:54.568534  405191 cri.go:89] found id: ""
	I1206 10:54:54.568559  405191 logs.go:282] 0 containers: []
	W1206 10:54:54.568566  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:54.568571  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:54.568631  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:54.598488  405191 cri.go:89] found id: ""
	I1206 10:54:54.598501  405191 logs.go:282] 0 containers: []
	W1206 10:54:54.598508  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:54.598513  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:54.598573  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:54.625601  405191 cri.go:89] found id: ""
	I1206 10:54:54.625615  405191 logs.go:282] 0 containers: []
	W1206 10:54:54.625622  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:54.625627  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:54.625684  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:54.651039  405191 cri.go:89] found id: ""
	I1206 10:54:54.651053  405191 logs.go:282] 0 containers: []
	W1206 10:54:54.651069  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:54.651077  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:54.651088  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:54.721711  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:54.712700   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:54.713574   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:54.715366   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:54.715761   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:54.717298   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:54:54.721724  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:54.721734  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:54.793778  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:54.793803  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:54.825565  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:54.825580  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:54.891107  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:54.891127  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
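	The dmesg invocation limits kernel output to triage-relevant lines; with a recent util-linux the flags decompose as:
	
	    sudo dmesg -P -H \                        # human timestamps, no pager
	         -L=never \                           # no color codes in captured logs
	         --level warn,err,crit,alert,emerg \  # warnings and worse only
	         | tail -n 400                        # cap volume per gather cycle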
	I1206 10:54:57.406177  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:57.416168  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:57.416231  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:57.444260  405191 cri.go:89] found id: ""
	I1206 10:54:57.444274  405191 logs.go:282] 0 containers: []
	W1206 10:54:57.444281  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:57.444286  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:57.444352  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:57.473921  405191 cri.go:89] found id: ""
	I1206 10:54:57.473935  405191 logs.go:282] 0 containers: []
	W1206 10:54:57.473942  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:57.473947  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:57.474006  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:57.507969  405191 cri.go:89] found id: ""
	I1206 10:54:57.507983  405191 logs.go:282] 0 containers: []
	W1206 10:54:57.507990  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:57.507995  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:57.508057  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:57.536405  405191 cri.go:89] found id: ""
	I1206 10:54:57.536420  405191 logs.go:282] 0 containers: []
	W1206 10:54:57.536428  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:57.536433  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:57.536502  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:57.564180  405191 cri.go:89] found id: ""
	I1206 10:54:57.564194  405191 logs.go:282] 0 containers: []
	W1206 10:54:57.564201  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:57.564206  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:57.564271  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:57.594665  405191 cri.go:89] found id: ""
	I1206 10:54:57.594679  405191 logs.go:282] 0 containers: []
	W1206 10:54:57.594687  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:57.594692  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:57.594751  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:57.627345  405191 cri.go:89] found id: ""
	I1206 10:54:57.627360  405191 logs.go:282] 0 containers: []
	W1206 10:54:57.627367  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:57.627398  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:57.627409  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:57.694026  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:57.694046  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:57.708621  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:57.708636  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:57.772743  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:57.764569   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:57.765305   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:57.766828   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:57.767291   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:57.768789   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:54:57.772753  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:57.772764  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:57.841816  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:57.841836  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
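	Note that "describe nodes" never touches the host's kubectl: minikube shells into the node and runs the version-pinned binary it provisioned, against the node-local kubeconfig. The equivalent manual invocation, using the same paths the log shows, is:
	
	    minikube ssh -- sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
	      --kubeconfig=/var/lib/minikube/kubeconfig describe nodes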
	I1206 10:55:00.375636  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:00.396560  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:55:00.396634  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:55:00.458455  405191 cri.go:89] found id: ""
	I1206 10:55:00.458471  405191 logs.go:282] 0 containers: []
	W1206 10:55:00.458479  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:55:00.458485  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:55:00.458553  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:55:00.497287  405191 cri.go:89] found id: ""
	I1206 10:55:00.497304  405191 logs.go:282] 0 containers: []
	W1206 10:55:00.497311  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:55:00.497317  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:55:00.497382  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:55:00.531076  405191 cri.go:89] found id: ""
	I1206 10:55:00.531092  405191 logs.go:282] 0 containers: []
	W1206 10:55:00.531099  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:55:00.531104  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:55:00.531172  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:55:00.567464  405191 cri.go:89] found id: ""
	I1206 10:55:00.567485  405191 logs.go:282] 0 containers: []
	W1206 10:55:00.567493  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:55:00.567499  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:55:00.567600  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:55:00.600497  405191 cri.go:89] found id: ""
	I1206 10:55:00.600512  405191 logs.go:282] 0 containers: []
	W1206 10:55:00.600520  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:55:00.600526  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:55:00.600596  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:55:00.648830  405191 cri.go:89] found id: ""
	I1206 10:55:00.648852  405191 logs.go:282] 0 containers: []
	W1206 10:55:00.648861  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:55:00.648868  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:55:00.648939  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:55:00.678773  405191 cri.go:89] found id: ""
	I1206 10:55:00.678789  405191 logs.go:282] 0 containers: []
	W1206 10:55:00.678797  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:55:00.678822  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:55:00.678834  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:55:00.748615  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:55:00.748637  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:55:00.764401  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:55:00.764420  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:55:00.836152  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:55:00.827231   16271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:00.828085   16271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:00.830005   16271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:00.830399   16271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:00.832026   16271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:55:00.836163  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:55:00.836174  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:55:00.909732  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:55:00.909761  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:55:03.441095  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:03.451635  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:55:03.451701  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:55:03.486201  405191 cri.go:89] found id: ""
	I1206 10:55:03.486214  405191 logs.go:282] 0 containers: []
	W1206 10:55:03.486222  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:55:03.486226  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:55:03.486286  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:55:03.530153  405191 cri.go:89] found id: ""
	I1206 10:55:03.530167  405191 logs.go:282] 0 containers: []
	W1206 10:55:03.530174  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:55:03.530179  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:55:03.530243  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:55:03.559790  405191 cri.go:89] found id: ""
	I1206 10:55:03.559804  405191 logs.go:282] 0 containers: []
	W1206 10:55:03.559811  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:55:03.559816  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:55:03.559874  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:55:03.586392  405191 cri.go:89] found id: ""
	I1206 10:55:03.586406  405191 logs.go:282] 0 containers: []
	W1206 10:55:03.586413  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:55:03.586418  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:55:03.586477  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:55:03.612699  405191 cri.go:89] found id: ""
	I1206 10:55:03.612714  405191 logs.go:282] 0 containers: []
	W1206 10:55:03.612726  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:55:03.612732  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:55:03.612827  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:55:03.641895  405191 cri.go:89] found id: ""
	I1206 10:55:03.641909  405191 logs.go:282] 0 containers: []
	W1206 10:55:03.641916  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:55:03.641921  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:55:03.641978  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:55:03.668194  405191 cri.go:89] found id: ""
	I1206 10:55:03.668208  405191 logs.go:282] 0 containers: []
	W1206 10:55:03.668216  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:55:03.668224  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:55:03.668234  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:55:03.738567  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:55:03.738585  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:55:03.753715  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:55:03.753732  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:55:03.819356  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:55:03.811487   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:03.812006   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:03.813500   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:03.813921   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:03.815528   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:55:03.819368  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:55:03.819393  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:55:03.888845  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:55:03.888866  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:55:06.421279  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:06.431630  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:55:06.431691  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:55:06.457432  405191 cri.go:89] found id: ""
	I1206 10:55:06.457446  405191 logs.go:282] 0 containers: []
	W1206 10:55:06.457453  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:55:06.457458  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:55:06.457525  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:55:06.498897  405191 cri.go:89] found id: ""
	I1206 10:55:06.498911  405191 logs.go:282] 0 containers: []
	W1206 10:55:06.498918  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:55:06.498923  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:55:06.498994  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:55:06.532288  405191 cri.go:89] found id: ""
	I1206 10:55:06.532320  405191 logs.go:282] 0 containers: []
	W1206 10:55:06.532328  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:55:06.532332  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:55:06.532403  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:55:06.558737  405191 cri.go:89] found id: ""
	I1206 10:55:06.558751  405191 logs.go:282] 0 containers: []
	W1206 10:55:06.558758  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:55:06.558764  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:55:06.558835  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:55:06.588791  405191 cri.go:89] found id: ""
	I1206 10:55:06.588805  405191 logs.go:282] 0 containers: []
	W1206 10:55:06.588813  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:55:06.588818  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:55:06.588887  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:55:06.615097  405191 cri.go:89] found id: ""
	I1206 10:55:06.615110  405191 logs.go:282] 0 containers: []
	W1206 10:55:06.615117  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:55:06.615122  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:55:06.615182  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:55:06.640273  405191 cri.go:89] found id: ""
	I1206 10:55:06.640297  405191 logs.go:282] 0 containers: []
	W1206 10:55:06.640305  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:55:06.640312  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:55:06.640323  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:55:06.709781  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:55:06.709800  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:55:06.724307  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:55:06.724323  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:55:06.788894  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:55:06.780020   16482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:06.780621   16482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:06.782266   16482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:06.782823   16482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:06.784380   16482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:55:06.788903  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:55:06.788913  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:55:06.857942  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:55:06.857963  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:55:09.392819  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:09.402617  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:55:09.402675  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:55:09.429928  405191 cri.go:89] found id: ""
	I1206 10:55:09.429942  405191 logs.go:282] 0 containers: []
	W1206 10:55:09.429949  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:55:09.429955  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:55:09.430018  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:55:09.455893  405191 cri.go:89] found id: ""
	I1206 10:55:09.455907  405191 logs.go:282] 0 containers: []
	W1206 10:55:09.455913  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:55:09.455918  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:55:09.455975  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:55:09.492759  405191 cri.go:89] found id: ""
	I1206 10:55:09.492772  405191 logs.go:282] 0 containers: []
	W1206 10:55:09.492779  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:55:09.492784  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:55:09.492842  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:55:09.524405  405191 cri.go:89] found id: ""
	I1206 10:55:09.524418  405191 logs.go:282] 0 containers: []
	W1206 10:55:09.524425  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:55:09.524430  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:55:09.524488  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:55:09.555465  405191 cri.go:89] found id: ""
	I1206 10:55:09.555479  405191 logs.go:282] 0 containers: []
	W1206 10:55:09.555486  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:55:09.555491  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:55:09.555551  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:55:09.582561  405191 cri.go:89] found id: ""
	I1206 10:55:09.582575  405191 logs.go:282] 0 containers: []
	W1206 10:55:09.582582  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:55:09.582588  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:55:09.582646  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:55:09.608767  405191 cri.go:89] found id: ""
	I1206 10:55:09.608781  405191 logs.go:282] 0 containers: []
	W1206 10:55:09.608788  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:55:09.608796  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:55:09.608810  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:55:09.677518  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:55:09.677539  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:55:09.692935  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:55:09.692955  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:55:09.760066  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:55:09.750973   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:09.751783   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:09.753612   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:09.754387   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:09.755955   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:55:09.760077  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:55:09.760087  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:55:09.829605  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:55:09.829626  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:55:12.359607  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:12.370647  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:55:12.370708  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:55:12.402338  405191 cri.go:89] found id: ""
	I1206 10:55:12.402353  405191 logs.go:282] 0 containers: []
	W1206 10:55:12.402361  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:55:12.402366  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:55:12.402435  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:55:12.428498  405191 cri.go:89] found id: ""
	I1206 10:55:12.428513  405191 logs.go:282] 0 containers: []
	W1206 10:55:12.428520  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:55:12.428525  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:55:12.428587  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:55:12.454311  405191 cri.go:89] found id: ""
	I1206 10:55:12.454325  405191 logs.go:282] 0 containers: []
	W1206 10:55:12.454333  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:55:12.454338  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:55:12.454399  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:55:12.493402  405191 cri.go:89] found id: ""
	I1206 10:55:12.493416  405191 logs.go:282] 0 containers: []
	W1206 10:55:12.493423  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:55:12.493429  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:55:12.493487  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:55:12.527015  405191 cri.go:89] found id: ""
	I1206 10:55:12.527029  405191 logs.go:282] 0 containers: []
	W1206 10:55:12.527036  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:55:12.527042  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:55:12.527103  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:55:12.556788  405191 cri.go:89] found id: ""
	I1206 10:55:12.556812  405191 logs.go:282] 0 containers: []
	W1206 10:55:12.556820  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:55:12.556825  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:55:12.556897  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:55:12.584336  405191 cri.go:89] found id: ""
	I1206 10:55:12.584350  405191 logs.go:282] 0 containers: []
	W1206 10:55:12.584357  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:55:12.584365  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:55:12.584376  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:55:12.614039  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:55:12.614055  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:55:12.680316  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:55:12.680338  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:55:12.696525  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:55:12.696542  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:55:12.760110  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:55:12.751882   16704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:12.752591   16704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:12.754143   16704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:12.754484   16704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:12.756046   16704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:55:12.760120  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:55:12.760131  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:55:15.332168  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:15.342873  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:55:15.342950  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:55:15.371175  405191 cri.go:89] found id: ""
	I1206 10:55:15.371189  405191 logs.go:282] 0 containers: []
	W1206 10:55:15.371207  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:55:15.371212  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:55:15.371279  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:55:15.397085  405191 cri.go:89] found id: ""
	I1206 10:55:15.397100  405191 logs.go:282] 0 containers: []
	W1206 10:55:15.397107  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:55:15.397112  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:55:15.397171  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:55:15.422142  405191 cri.go:89] found id: ""
	I1206 10:55:15.422156  405191 logs.go:282] 0 containers: []
	W1206 10:55:15.422163  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:55:15.422174  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:55:15.422231  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:55:15.447127  405191 cri.go:89] found id: ""
	I1206 10:55:15.447141  405191 logs.go:282] 0 containers: []
	W1206 10:55:15.447148  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:55:15.447154  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:55:15.447212  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:55:15.477786  405191 cri.go:89] found id: ""
	I1206 10:55:15.477800  405191 logs.go:282] 0 containers: []
	W1206 10:55:15.477808  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:55:15.477813  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:55:15.477875  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:55:15.507270  405191 cri.go:89] found id: ""
	I1206 10:55:15.507285  405191 logs.go:282] 0 containers: []
	W1206 10:55:15.507292  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:55:15.507297  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:55:15.507360  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:55:15.536433  405191 cri.go:89] found id: ""
	I1206 10:55:15.536451  405191 logs.go:282] 0 containers: []
	W1206 10:55:15.536458  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:55:15.536470  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:55:15.536480  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:55:15.608040  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:55:15.608061  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:55:15.623617  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:55:15.623635  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:55:15.692548  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:55:15.684603   16800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:15.685140   16800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:15.686901   16800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:15.687564   16800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:15.688573   16800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:55:15.684603   16800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:15.685140   16800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:15.686901   16800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:15.687564   16800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:15.688573   16800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
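Every kubectl attempt fails identically: nothing is listening on localhost:8441, the apiserver port for this profile, which matches the empty kube-apiserver listing above. A quick confirmation from inside the node, assuming curl is available there:

    curl -sk --max-time 5 https://localhost:8441/healthz \
      || echo "apiserver not reachable on 8441"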
	I1206 10:55:15.692558  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:55:15.692581  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:55:15.760517  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:55:15.760537  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
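With no containers found, minikube falls back to collecting a diagnostic bundle: the last 400 kubelet and CRI-O journal lines, filtered dmesg, a node description, and a raw container listing. The same bundle gathered by hand on a systemd host:

    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u crio -n 400
    sudo crictl ps -a

It then repeats the probe-and-gather cycle on a roughly three-second interval, as the timestamps below show.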
	I1206 10:55:18.289173  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:18.300544  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:55:18.300610  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:55:18.327678  405191 cri.go:89] found id: ""
	I1206 10:55:18.327692  405191 logs.go:282] 0 containers: []
	W1206 10:55:18.327699  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:55:18.327704  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:55:18.327764  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:55:18.353999  405191 cri.go:89] found id: ""
	I1206 10:55:18.354014  405191 logs.go:282] 0 containers: []
	W1206 10:55:18.354021  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:55:18.354026  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:55:18.354084  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:55:18.382276  405191 cri.go:89] found id: ""
	I1206 10:55:18.382291  405191 logs.go:282] 0 containers: []
	W1206 10:55:18.382298  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:55:18.382304  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:55:18.382365  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:55:18.410827  405191 cri.go:89] found id: ""
	I1206 10:55:18.410841  405191 logs.go:282] 0 containers: []
	W1206 10:55:18.410847  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:55:18.410852  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:55:18.410911  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:55:18.436138  405191 cri.go:89] found id: ""
	I1206 10:55:18.436160  405191 logs.go:282] 0 containers: []
	W1206 10:55:18.436167  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:55:18.436172  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:55:18.436233  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:55:18.462254  405191 cri.go:89] found id: ""
	I1206 10:55:18.462269  405191 logs.go:282] 0 containers: []
	W1206 10:55:18.462276  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:55:18.462283  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:55:18.462346  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:55:18.492347  405191 cri.go:89] found id: ""
	I1206 10:55:18.492362  405191 logs.go:282] 0 containers: []
	W1206 10:55:18.492369  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:55:18.492377  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:55:18.492388  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:55:18.509956  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:55:18.509973  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:55:18.581031  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:55:18.572020   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:18.572812   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:18.573929   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:18.574912   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:18.575787   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:55:18.572020   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:18.572812   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:18.573929   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:18.574912   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:18.575787   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:55:18.581041  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:55:18.581055  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:55:18.650942  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:55:18.650963  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:55:18.680668  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:55:18.680685  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:55:21.248379  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:21.258903  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:55:21.258982  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:55:21.286273  405191 cri.go:89] found id: ""
	I1206 10:55:21.286288  405191 logs.go:282] 0 containers: []
	W1206 10:55:21.286295  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:55:21.286300  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:55:21.286357  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:55:21.311824  405191 cri.go:89] found id: ""
	I1206 10:55:21.311841  405191 logs.go:282] 0 containers: []
	W1206 10:55:21.311851  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:55:21.311857  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:55:21.311923  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:55:21.338690  405191 cri.go:89] found id: ""
	I1206 10:55:21.338704  405191 logs.go:282] 0 containers: []
	W1206 10:55:21.338711  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:55:21.338716  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:55:21.338773  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:55:21.365841  405191 cri.go:89] found id: ""
	I1206 10:55:21.365855  405191 logs.go:282] 0 containers: []
	W1206 10:55:21.365862  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:55:21.365868  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:55:21.365926  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:55:21.396001  405191 cri.go:89] found id: ""
	I1206 10:55:21.396035  405191 logs.go:282] 0 containers: []
	W1206 10:55:21.396043  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:55:21.396049  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:55:21.396118  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:55:21.421823  405191 cri.go:89] found id: ""
	I1206 10:55:21.421837  405191 logs.go:282] 0 containers: []
	W1206 10:55:21.421856  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:55:21.421862  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:55:21.421934  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:55:21.449590  405191 cri.go:89] found id: ""
	I1206 10:55:21.449604  405191 logs.go:282] 0 containers: []
	W1206 10:55:21.449611  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:55:21.449619  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:55:21.449631  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:55:21.464618  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:55:21.464634  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:55:21.543901  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:55:21.526696   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:21.535561   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:21.536267   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:21.537985   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:21.538498   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:55:21.526696   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:21.535561   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:21.536267   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:21.537985   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:21.538498   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:55:21.543913  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:55:21.543926  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:55:21.614646  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:55:21.614669  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:55:21.645809  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:55:21.645825  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:55:24.214037  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:24.226008  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:55:24.226071  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:55:24.252473  405191 cri.go:89] found id: ""
	I1206 10:55:24.252487  405191 logs.go:282] 0 containers: []
	W1206 10:55:24.252495  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:55:24.252500  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:55:24.252560  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:55:24.280242  405191 cri.go:89] found id: ""
	I1206 10:55:24.280256  405191 logs.go:282] 0 containers: []
	W1206 10:55:24.280263  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:55:24.280268  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:55:24.280328  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:55:24.307083  405191 cri.go:89] found id: ""
	I1206 10:55:24.307098  405191 logs.go:282] 0 containers: []
	W1206 10:55:24.307105  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:55:24.307111  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:55:24.307181  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:55:24.333215  405191 cri.go:89] found id: ""
	I1206 10:55:24.333230  405191 logs.go:282] 0 containers: []
	W1206 10:55:24.333239  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:55:24.333245  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:55:24.333312  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:55:24.364248  405191 cri.go:89] found id: ""
	I1206 10:55:24.364262  405191 logs.go:282] 0 containers: []
	W1206 10:55:24.364269  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:55:24.364275  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:55:24.364340  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:55:24.392539  405191 cri.go:89] found id: ""
	I1206 10:55:24.392554  405191 logs.go:282] 0 containers: []
	W1206 10:55:24.392561  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:55:24.392567  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:55:24.392631  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:55:24.419045  405191 cri.go:89] found id: ""
	I1206 10:55:24.419059  405191 logs.go:282] 0 containers: []
	W1206 10:55:24.419066  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:55:24.419074  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:55:24.419084  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:55:24.485101  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:55:24.485123  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:55:24.506235  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:55:24.506258  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:55:24.586208  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:55:24.577740   17118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:24.578227   17118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:24.579907   17118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:24.580253   17118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:24.581928   17118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:55:24.577740   17118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:24.578227   17118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:24.579907   17118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:24.580253   17118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:24.581928   17118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:55:24.586218  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:55:24.586230  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:55:24.654219  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:55:24.654241  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:55:27.183198  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:27.194048  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:55:27.194116  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:55:27.223948  405191 cri.go:89] found id: ""
	I1206 10:55:27.223962  405191 logs.go:282] 0 containers: []
	W1206 10:55:27.223969  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:55:27.223974  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:55:27.224033  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:55:27.255792  405191 cri.go:89] found id: ""
	I1206 10:55:27.255807  405191 logs.go:282] 0 containers: []
	W1206 10:55:27.255814  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:55:27.255819  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:55:27.255882  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:55:27.285352  405191 cri.go:89] found id: ""
	I1206 10:55:27.285365  405191 logs.go:282] 0 containers: []
	W1206 10:55:27.285373  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:55:27.285380  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:55:27.285438  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:55:27.311572  405191 cri.go:89] found id: ""
	I1206 10:55:27.311599  405191 logs.go:282] 0 containers: []
	W1206 10:55:27.311606  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:55:27.311612  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:55:27.311684  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:55:27.337727  405191 cri.go:89] found id: ""
	I1206 10:55:27.337741  405191 logs.go:282] 0 containers: []
	W1206 10:55:27.337747  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:55:27.337753  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:55:27.337812  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:55:27.363513  405191 cri.go:89] found id: ""
	I1206 10:55:27.363527  405191 logs.go:282] 0 containers: []
	W1206 10:55:27.363534  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:55:27.363539  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:55:27.363611  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:55:27.390072  405191 cri.go:89] found id: ""
	I1206 10:55:27.390100  405191 logs.go:282] 0 containers: []
	W1206 10:55:27.390107  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:55:27.390115  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:55:27.390130  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:55:27.456548  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:55:27.456567  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:55:27.472626  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:55:27.472642  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:55:27.554055  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:55:27.545736   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:27.546254   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:27.548095   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:27.548446   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:27.550070   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:55:27.545736   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:27.546254   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:27.548095   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:27.548446   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:27.550070   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:55:27.554065  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:55:27.554076  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:55:27.622961  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:55:27.622984  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:55:30.156731  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:30.168052  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:55:30.168115  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:55:30.195190  405191 cri.go:89] found id: ""
	I1206 10:55:30.195205  405191 logs.go:282] 0 containers: []
	W1206 10:55:30.195237  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:55:30.195243  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:55:30.195315  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:55:30.222581  405191 cri.go:89] found id: ""
	I1206 10:55:30.222615  405191 logs.go:282] 0 containers: []
	W1206 10:55:30.222622  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:55:30.222628  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:55:30.222697  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:55:30.251144  405191 cri.go:89] found id: ""
	I1206 10:55:30.251162  405191 logs.go:282] 0 containers: []
	W1206 10:55:30.251173  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:55:30.251178  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:55:30.251280  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:55:30.282704  405191 cri.go:89] found id: ""
	I1206 10:55:30.282731  405191 logs.go:282] 0 containers: []
	W1206 10:55:30.282739  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:55:30.282744  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:55:30.282818  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:55:30.308787  405191 cri.go:89] found id: ""
	I1206 10:55:30.308802  405191 logs.go:282] 0 containers: []
	W1206 10:55:30.308809  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:55:30.308814  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:55:30.308881  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:55:30.334479  405191 cri.go:89] found id: ""
	I1206 10:55:30.334494  405191 logs.go:282] 0 containers: []
	W1206 10:55:30.334501  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:55:30.334507  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:55:30.334582  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:55:30.361350  405191 cri.go:89] found id: ""
	I1206 10:55:30.361365  405191 logs.go:282] 0 containers: []
	W1206 10:55:30.361372  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:55:30.361380  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:55:30.361390  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:55:30.438089  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:55:30.438120  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:55:30.453200  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:55:30.453217  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:55:30.539250  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:55:30.524592   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:30.527641   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:30.528089   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:30.529752   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:30.530427   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:55:30.524592   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:30.527641   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:30.528089   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:30.529752   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:30.530427   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:55:30.539272  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:55:30.539285  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:55:30.610101  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:55:30.610121  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:55:33.143484  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:33.153906  405191 kubeadm.go:602] duration metric: took 4m2.63956924s to restartPrimaryControlPlane
	W1206 10:55:33.153970  405191 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1206 10:55:33.154044  405191 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
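After 4m2.6s of this polling the control-plane restart is abandoned and minikube wipes the cluster state before re-initializing. The fallback, run manually with the minikube-staged kubeadm binary:

    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
      kubeadm reset --cri-socket /var/run/crio/crio.sock --force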
	I1206 10:55:33.564051  405191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:55:33.577264  405191 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 10:55:33.585285  405191 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 10:55:33.585343  405191 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:55:33.593207  405191 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:55:33.593217  405191 kubeadm.go:158] found existing configuration files:
	
	I1206 10:55:33.593284  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:55:33.601281  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:55:33.601338  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:55:33.609078  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:55:33.617336  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:55:33.617395  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:55:33.625100  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:55:33.633096  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:55:33.633153  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:55:33.640767  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:55:33.648692  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:55:33.648783  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
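kubeadm reset already deleted the /etc/kubernetes/*.conf files, so each grep for the control-plane endpoint exits with status 2 and the follow-up rm is a no-op. The four per-file checks above amount to this loop:

    for f in admin kubelet controller-manager scheduler; do
      conf=/etc/kubernetes/$f.conf
      sudo grep -q https://control-plane.minikube.internal:8441 "$conf" \
        || sudo rm -f "$conf"    # drop configs that are stale or, as here, missing
    done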
	I1206 10:55:33.656355  405191 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 10:55:33.695114  405191 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 10:55:33.695495  405191 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:55:33.776558  405191 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:55:33.776622  405191 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:55:33.776656  405191 kubeadm.go:319] OS: Linux
	I1206 10:55:33.776700  405191 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:55:33.776747  405191 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:55:33.776793  405191 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:55:33.776839  405191 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:55:33.776886  405191 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:55:33.776933  405191 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:55:33.776976  405191 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:55:33.777023  405191 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:55:33.777067  405191 kubeadm.go:319] CGROUPS_BLKIO: enabled
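The preflight verification enumerates cgroup v1 controllers, all enabled on this 5.15 AWS kernel; the warnings further down confirm the node is still on cgroups v1. Two quick ways to check the same state directly (the stat form prints cgroup2fs on a v2 host, tmpfs on v1):

    cat /proc/cgroups              # v1 controller table: cpu, memory, pids, ...
    stat -fc %T /sys/fs/cgroup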
	I1206 10:55:33.839562  405191 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:55:33.839700  405191 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:55:33.839825  405191 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:55:33.847872  405191 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:55:33.851528  405191 out.go:252]   - Generating certificates and keys ...
	I1206 10:55:33.851642  405191 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:55:33.851732  405191 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:55:33.851823  405191 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 10:55:33.851888  405191 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 10:55:33.851963  405191 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 10:55:33.852020  405191 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 10:55:33.852092  405191 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 10:55:33.852157  405191 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 10:55:33.852236  405191 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 10:55:33.852314  405191 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 10:55:33.852354  405191 kubeadm.go:319] [certs] Using the existing "sa" key
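Every certificate and the service-account key are reused from the earlier failed start rather than regenerated, since kubeadm finds them on disk under its certificateDir. To see what was kept (the etcd/ prefix in the messages above maps to an etcd subdirectory):

    sudo ls -l /var/lib/minikube/certs /var/lib/minikube/certs/etcd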
	I1206 10:55:33.852412  405191 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:55:34.131310  405191 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:55:34.288855  405191 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:55:34.553487  405191 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:55:35.148231  405191 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:55:35.211116  405191 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:55:35.211864  405191 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:55:35.214714  405191 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:55:35.218231  405191 out.go:252]   - Booting up control plane ...
	I1206 10:55:35.218330  405191 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:55:35.218406  405191 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:55:35.218472  405191 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:55:35.235870  405191 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:55:35.235976  405191 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:55:35.244902  405191 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:55:35.245320  405191 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:55:35.245379  405191 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:55:35.375634  405191 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:55:35.375747  405191 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
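Between this line and the next, kubeadm spends its full 4m0s budget polling the kubelet's local health endpoint; note the jump from 10:55:35 to 10:59:35. The probe it describes, run by hand:

    until curl -sS --max-time 2 http://127.0.0.1:10248/healthz; do
      sleep 2    # kubeadm gives up after 4m0s; this loops until healthy
    done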
	I1206 10:59:35.374512  405191 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000270227s
	I1206 10:59:35.374544  405191 kubeadm.go:319] 
	I1206 10:59:35.374605  405191 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 10:59:35.374643  405191 kubeadm.go:319] 	- The kubelet is not running
	I1206 10:59:35.374758  405191 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 10:59:35.374763  405191 kubeadm.go:319] 
	I1206 10:59:35.374876  405191 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 10:59:35.374910  405191 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 10:59:35.374942  405191 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 10:59:35.374945  405191 kubeadm.go:319] 
	I1206 10:59:35.380563  405191 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 10:59:35.380998  405191 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 10:59:35.381115  405191 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 10:59:35.381348  405191 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 10:59:35.381353  405191 kubeadm.go:319] 
	I1206 10:59:35.381420  405191 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
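The failure is on the kubelet side, not in kubeadm's own phases: the static-pod manifests were written, but the kubelet never answered its health check, so no control-plane pods could start. The log's own suggested next steps, plus the unit-state check minikube itself runs right after the reset:

    systemctl status kubelet
    journalctl -xeu kubelet
    sudo systemctl is-active kubelet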
	W1206 10:59:35.381523  405191 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000270227s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
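Of the three preflight warnings, the cgroups v1 deprecation is the one with an actionable knob: per the warning text, kubelet v1.35 or newer must be told explicitly to keep running on a v1 host. A sketch of the opt-in, assuming the KubeletConfiguration field is spelled failCgroupV1 as the warning's prose suggests:

    # fragment for the kubelet config file (e.g. /var/lib/kubelet/config.yaml)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    failCgroupV1: false    # explicitly permit running on cgroups v1

Whether this warning or something else kept the kubelet down is not decidable from this excerpt; the journalctl output above would show the actual exit reason.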
	
	I1206 10:59:35.381613  405191 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1206 10:59:35.796714  405191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:59:35.809334  405191 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 10:59:35.809388  405191 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:59:35.817444  405191 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:59:35.817452  405191 kubeadm.go:158] found existing configuration files:
	
	I1206 10:59:35.817502  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:59:35.825442  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:59:35.825501  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:59:35.833082  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:59:35.842093  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:59:35.842159  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:59:35.851759  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:59:35.860099  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:59:35.860161  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:59:35.867900  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:59:35.876130  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:59:35.876188  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 10:59:35.884013  405191 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 10:59:35.926383  405191 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 10:59:35.926438  405191 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:59:36.016832  405191 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:59:36.016925  405191 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:59:36.016974  405191 kubeadm.go:319] OS: Linux
	I1206 10:59:36.017019  405191 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:59:36.017071  405191 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:59:36.017119  405191 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:59:36.017173  405191 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:59:36.017220  405191 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:59:36.017277  405191 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:59:36.017339  405191 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:59:36.017401  405191 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:59:36.017447  405191 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:59:36.080832  405191 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:59:36.080951  405191 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:59:36.081048  405191 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:59:36.091906  405191 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:59:36.097223  405191 out.go:252]   - Generating certificates and keys ...
	I1206 10:59:36.097345  405191 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:59:36.097426  405191 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:59:36.097511  405191 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 10:59:36.097596  405191 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 10:59:36.097675  405191 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 10:59:36.097750  405191 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 10:59:36.097815  405191 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 10:59:36.097876  405191 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 10:59:36.097954  405191 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 10:59:36.098026  405191 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 10:59:36.098063  405191 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 10:59:36.098122  405191 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:59:36.705762  405191 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:59:36.885173  405191 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:59:37.204953  405191 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:59:37.715956  405191 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:59:37.848965  405191 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:59:37.849735  405191 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:59:37.853600  405191 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:59:37.856590  405191 out.go:252]   - Booting up control plane ...
	I1206 10:59:37.856698  405191 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:59:37.856819  405191 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:59:37.858671  405191 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:59:37.873039  405191 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:59:37.873143  405191 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:59:37.880838  405191 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:59:37.881129  405191 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:59:37.881370  405191 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:59:38.015956  405191 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:59:38.016070  405191 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 11:03:38.011572  405191 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000448393s
	I1206 11:03:38.011605  405191 kubeadm.go:319] 
	I1206 11:03:38.011721  405191 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 11:03:38.011777  405191 kubeadm.go:319] 	- The kubelet is not running
	I1206 11:03:38.012051  405191 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 11:03:38.012060  405191 kubeadm.go:319] 
	I1206 11:03:38.012421  405191 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 11:03:38.012573  405191 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 11:03:38.012628  405191 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 11:03:38.012633  405191 kubeadm.go:319] 
	I1206 11:03:38.018189  405191 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 11:03:38.018608  405191 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 11:03:38.018716  405191 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 11:03:38.018960  405191 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 11:03:38.018965  405191 kubeadm.go:319] 
	I1206 11:03:38.019033  405191 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 11:03:38.019089  405191 kubeadm.go:403] duration metric: took 12m7.551905569s to StartCluster
	I1206 11:03:38.019121  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:03:38.019191  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:03:38.048894  405191 cri.go:89] found id: ""
	I1206 11:03:38.048909  405191 logs.go:282] 0 containers: []
	W1206 11:03:38.048917  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:03:38.048922  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:03:38.049009  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:03:38.077125  405191 cri.go:89] found id: ""
	I1206 11:03:38.077141  405191 logs.go:282] 0 containers: []
	W1206 11:03:38.077149  405191 logs.go:284] No container was found matching "etcd"
	I1206 11:03:38.077154  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:03:38.077229  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:03:38.104859  405191 cri.go:89] found id: ""
	I1206 11:03:38.104873  405191 logs.go:282] 0 containers: []
	W1206 11:03:38.104881  405191 logs.go:284] No container was found matching "coredns"
	I1206 11:03:38.104886  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:03:38.104946  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:03:38.131268  405191 cri.go:89] found id: ""
	I1206 11:03:38.131282  405191 logs.go:282] 0 containers: []
	W1206 11:03:38.131289  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:03:38.131295  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:03:38.131356  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:03:38.161469  405191 cri.go:89] found id: ""
	I1206 11:03:38.161483  405191 logs.go:282] 0 containers: []
	W1206 11:03:38.161490  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:03:38.161495  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:03:38.161555  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:03:38.191440  405191 cri.go:89] found id: ""
	I1206 11:03:38.191454  405191 logs.go:282] 0 containers: []
	W1206 11:03:38.191461  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:03:38.191467  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:03:38.191536  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:03:38.219921  405191 cri.go:89] found id: ""
	I1206 11:03:38.219935  405191 logs.go:282] 0 containers: []
	W1206 11:03:38.219943  405191 logs.go:284] No container was found matching "kindnet"
	I1206 11:03:38.219951  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:03:38.219962  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:03:38.285137  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:03:38.277076   21116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:38.277519   21116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:38.279007   21116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:38.279647   21116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:38.281164   21116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:03:38.277076   21116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:38.277519   21116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:38.279007   21116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:38.279647   21116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:38.281164   21116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:03:38.285157  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:03:38.285169  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:03:38.355235  405191 logs.go:123] Gathering logs for container status ...
	I1206 11:03:38.355259  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:03:38.391661  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 11:03:38.391679  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:03:38.462714  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 11:03:38.462733  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1206 11:03:38.480853  405191 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000448393s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1206 11:03:38.480894  405191 out.go:285] * 
	W1206 11:03:38.480951  405191 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000448393s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 11:03:38.480964  405191 out.go:285] * 
	W1206 11:03:38.483093  405191 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 11:03:38.488282  405191 out.go:203] 
	W1206 11:03:38.491978  405191 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000448393s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 11:03:38.492089  405191 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 11:03:38.492161  405191 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1206 11:03:38.495164  405191 out.go:203] 
	
	
	==> CRI-O <==
	Dec 06 10:51:28 functional-196950 crio[9931]: time="2025-12-06T10:51:28.716514999Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 06 10:51:28 functional-196950 crio[9931]: time="2025-12-06T10:51:28.716552218Z" level=info msg="Starting seccomp notifier watcher"
	Dec 06 10:51:28 functional-196950 crio[9931]: time="2025-12-06T10:51:28.716594951Z" level=info msg="Create NRI interface"
	Dec 06 10:51:28 functional-196950 crio[9931]: time="2025-12-06T10:51:28.716700199Z" level=info msg="built-in NRI default validator is disabled"
	Dec 06 10:51:28 functional-196950 crio[9931]: time="2025-12-06T10:51:28.716709069Z" level=info msg="runtime interface created"
	Dec 06 10:51:28 functional-196950 crio[9931]: time="2025-12-06T10:51:28.716719834Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 06 10:51:28 functional-196950 crio[9931]: time="2025-12-06T10:51:28.716725832Z" level=info msg="runtime interface starting up..."
	Dec 06 10:51:28 functional-196950 crio[9931]: time="2025-12-06T10:51:28.716731789Z" level=info msg="starting plugins..."
	Dec 06 10:51:28 functional-196950 crio[9931]: time="2025-12-06T10:51:28.716744294Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 06 10:51:28 functional-196950 crio[9931]: time="2025-12-06T10:51:28.71680827Z" level=info msg="No systemd watchdog enabled"
	Dec 06 10:51:28 functional-196950 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 06 10:55:33 functional-196950 crio[9931]: time="2025-12-06T10:55:33.843321747Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=5f168690-0479-4b67-8846-d623c54570c0 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:55:33 functional-196950 crio[9931]: time="2025-12-06T10:55:33.844283422Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=5ac9537d-0142-4bb1-b0d9-019b296bd707 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:55:33 functional-196950 crio[9931]: time="2025-12-06T10:55:33.844830562Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=29c4c588-f2de-4c84-8064-807353d6179d name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:55:33 functional-196950 crio[9931]: time="2025-12-06T10:55:33.845343116Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=1da192f7-cfcc-42b9-837a-9f285f929dcd name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:55:33 functional-196950 crio[9931]: time="2025-12-06T10:55:33.845798415Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=320d875b-b045-44db-aacf-14add6cc927b name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:55:33 functional-196950 crio[9931]: time="2025-12-06T10:55:33.846245664Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=dd8834be-d268-43d9-854b-d34b54047169 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:55:33 functional-196950 crio[9931]: time="2025-12-06T10:55:33.846769058Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=25869f1e-412e-46d2-b706-063f94749122 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.084499828Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=f0fd0946-b323-435f-946c-e412850eb9c0 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.085495997Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=34b6bb47-44a4-4780-9567-04c497973fa7 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.08608111Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=e26253f3-5094-4fbe-b6d1-306f2e31fa9a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.086661177Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=f8a4ffa3-a2d3-4c05-ba97-fd167ad1ff4e name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.087187984Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=43819321-fbd2-4155-9f0b-c716c27fc9ce name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.087957288Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=aeb2085e-c4e7-4d42-9049-5042f515cbdb name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.088481658Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=0f01a640-f6a6-41dd-afc3-f5cae208f89a name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:03:42.208161   21384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:42.208843   21384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:42.210812   21384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:42.211326   21384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:42.213403   21384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 6 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014752] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503231] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.065820] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.901896] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.944603] kauditd_printk_skb: 39 callbacks suppressed
	[Dec 6 09:04] hrtimer: interrupt took 32230230 ns
	[Dec 6 10:25] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 6 10:26] overlayfs: idmapped layers are currently not supported
	[  +0.066821] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 6 10:32] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:33] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:51] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 11:03:42 up  2:46,  0 user,  load average: 0.36, 0.23, 0.51
	Linux functional-196950 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 11:03:40 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:03:40 functional-196950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1805.
	Dec 06 11:03:40 functional-196950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:03:40 functional-196950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:03:40 functional-196950 kubelet[21271]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:03:40 functional-196950 kubelet[21271]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:03:40 functional-196950 kubelet[21271]: E1206 11:03:40.776501   21271 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:03:40 functional-196950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:03:40 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:03:41 functional-196950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1806.
	Dec 06 11:03:41 functional-196950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:03:41 functional-196950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:03:41 functional-196950 kubelet[21300]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:03:41 functional-196950 kubelet[21300]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:03:41 functional-196950 kubelet[21300]: E1206 11:03:41.532281   21300 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:03:41 functional-196950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:03:41 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:03:42 functional-196950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1807.
	Dec 06 11:03:42 functional-196950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:03:42 functional-196950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:03:42 functional-196950 kubelet[21389]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:03:42 functional-196950 kubelet[21389]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:03:42 functional-196950 kubelet[21389]: E1206 11:03:42.287726   21389 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:03:42 functional-196950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:03:42 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-196950 -n functional-196950
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-196950 -n functional-196950: exit status 2 (362.312435ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-196950" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.32s)
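
The kubelet journal above isolates the root cause of this failure: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), so the healthz probe at 127.0.0.1:10248 never succeeds and kubeadm gives up after 4m0s. A minimal triage sketch follows; it assumes shell access to the node, and the YAML spelling failCgroupV1 is inferred from the 'FailCgroupV1' warning text above rather than confirmed against this kubelet build:

	# Confirm which cgroup hierarchy the host exposes:
	# "cgroup2fs" means cgroup v2, "tmpfs" means cgroup v1.
	stat -fc %T /sys/fs/cgroup

	# Inspect the restart loop seen in the journal (restart counter 1805+):
	journalctl -xeu kubelet | tail -n 50

	# Per the SystemVerification warning, cgroup v1 can be re-enabled
	# explicitly in the kubelet configuration (assumed field spelling):
	#   failCgroupV1: false
	# or follow minikube's own suggestion from the log:
	minikube start -p functional-196950 --extra-config=kubelet.cgroup-driver=systemd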

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.07s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-196950 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-196950 apply -f testdata/invalidsvc.yaml: exit status 1 (74.534883ms)

** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

** /stderr **
functional_test.go:2328: kubectl --context functional-196950 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.07s)
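
This failure is downstream of the same stopped apiserver rather than an independent bug: kubectl cannot download the OpenAPI schema it uses for validation, so the invalid-service path is never exercised. A quick preflight sketch, assuming the profile and context names from the log:

	# Confirm control-plane state before attributing the failure to the manifest:
	minikube -p functional-196950 status
	kubectl --context functional-196950 cluster-info
	# The error text notes that --validate=false skips the schema download,
	# but apply still needs a reachable apiserver to submit the object.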

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.78s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-196950 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-196950 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-196950 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-196950 --alsologtostderr -v=1] stderr:
I1206 11:05:42.208761  424023 out.go:360] Setting OutFile to fd 1 ...
I1206 11:05:42.208898  424023 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 11:05:42.208911  424023 out.go:374] Setting ErrFile to fd 2...
I1206 11:05:42.208916  424023 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 11:05:42.209207  424023 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
I1206 11:05:42.209550  424023 mustload.go:66] Loading cluster: functional-196950
I1206 11:05:42.210015  424023 config.go:182] Loaded profile config "functional-196950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 11:05:42.210531  424023 cli_runner.go:164] Run: docker container inspect functional-196950 --format={{.State.Status}}
I1206 11:05:42.230550  424023 host.go:66] Checking if "functional-196950" exists ...
I1206 11:05:42.230922  424023 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1206 11:05:42.294334  424023 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:05:42.284431918 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1206 11:05:42.294472  424023 api_server.go:166] Checking apiserver status ...
I1206 11:05:42.294542  424023 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1206 11:05:42.294604  424023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
I1206 11:05:42.312924  424023 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
W1206 11:05:42.421222  424023 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

stderr:
I1206 11:05:42.424502  424023 out.go:179] * The control-plane node functional-196950 apiserver is not running: (state=Stopped)
I1206 11:05:42.427329  424023 out.go:179]   To start a cluster, run: "minikube start -p functional-196950"
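
No URL is produced because the dashboard command's apiserver probe fails first: the log shows it running pgrep for a kube-apiserver process over SSH and finding none. The same check can be reproduced by hand; a sketch assuming the minikube ssh wrapper:

	# Mirror the probe from the log above; exit status 1 (no process)
	# matches the "state=Stopped" result that aborts the dashboard:
	minikube -p functional-196950 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'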
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-196950
helpers_test.go:243: (dbg) docker inspect functional-196950:

-- stdout --
	[
	    {
	        "Id": "d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1",
	        "Created": "2025-12-06T10:36:45.201779678Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 393848,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T10:36:45.318229053Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/hostname",
	        "HostsPath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/hosts",
	        "LogPath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1-json.log",
	        "Name": "/functional-196950",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-196950:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-196950",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1",
	                "LowerDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1-init/diff:/var/lib/docker/overlay2/5011226d55616c9977b14c1fe617d1302fe59373df05ce8ec6e21b79143a1c57/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-196950",
	                "Source": "/var/lib/docker/volumes/functional-196950/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-196950",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-196950",
	                "name.minikube.sigs.k8s.io": "functional-196950",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9b8f961d55d7529aed7b841f2ac9f818c22ff12b8ad73f2d6bcee22656d9749a",
	            "SandboxKey": "/var/run/docker/netns/9b8f961d55d7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33158"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-196950": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:c1:40:2a:93:47",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a566bfdfd33a868cf61e5b18b36cbd55e9868f24cbb091e055ae606aeb8c6f03",
	                    "EndpointID": "452fe32bde0c42c4c35d700488ae93aeecc6c6a971ac6f1a8a492dbc4b328ed9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-196950",
	                        "d150aac7296d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-196950 -n functional-196950
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-196950 -n functional-196950: exit status 2 (313.614739ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                         │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons    │ functional-196950 addons list -o json                                                                                                               │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │ 06 Dec 25 11:05 UTC │
	│ ssh       │ functional-196950 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │                     │
	│ mount     │ -p functional-196950 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo46525363/001:/mount-9p --alsologtostderr -v=1                │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │                     │
	│ ssh       │ functional-196950 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │ 06 Dec 25 11:05 UTC │
	│ ssh       │ functional-196950 ssh -- ls -la /mount-9p                                                                                                           │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │ 06 Dec 25 11:05 UTC │
	│ ssh       │ functional-196950 ssh cat /mount-9p/test-1765019135320467161                                                                                        │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │ 06 Dec 25 11:05 UTC │
	│ ssh       │ functional-196950 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                    │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │                     │
	│ ssh       │ functional-196950 ssh sudo umount -f /mount-9p                                                                                                      │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │ 06 Dec 25 11:05 UTC │
	│ mount     │ -p functional-196950 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1854477046/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │                     │
	│ ssh       │ functional-196950 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │                     │
	│ ssh       │ functional-196950 ssh findmnt -T /mount-9p | grep 9p                                                                                                │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │ 06 Dec 25 11:05 UTC │
	│ ssh       │ functional-196950 ssh -- ls -la /mount-9p                                                                                                           │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │ 06 Dec 25 11:05 UTC │
	│ ssh       │ functional-196950 ssh sudo umount -f /mount-9p                                                                                                      │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │                     │
	│ mount     │ -p functional-196950 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo571722621/001:/mount2 --alsologtostderr -v=1                 │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │                     │
	│ mount     │ -p functional-196950 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo571722621/001:/mount3 --alsologtostderr -v=1                 │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │                     │
	│ mount     │ -p functional-196950 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo571722621/001:/mount1 --alsologtostderr -v=1                 │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │                     │
	│ ssh       │ functional-196950 ssh findmnt -T /mount1                                                                                                            │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │                     │
	│ ssh       │ functional-196950 ssh findmnt -T /mount1                                                                                                            │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │ 06 Dec 25 11:05 UTC │
	│ ssh       │ functional-196950 ssh findmnt -T /mount2                                                                                                            │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │ 06 Dec 25 11:05 UTC │
	│ ssh       │ functional-196950 ssh findmnt -T /mount3                                                                                                            │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │ 06 Dec 25 11:05 UTC │
	│ mount     │ -p functional-196950 --kill=true                                                                                                                    │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │                     │
	│ start     │ -p functional-196950 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0       │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │                     │
	│ start     │ -p functional-196950 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                 │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │                     │
	│ start     │ -p functional-196950 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0       │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-196950 --alsologtostderr -v=1                                                                                      │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 11:05:41
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 11:05:41.957197  423975 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:05:41.957372  423975 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:05:41.957399  423975 out.go:374] Setting ErrFile to fd 2...
	I1206 11:05:41.957418  423975 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:05:41.957813  423975 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 11:05:41.958250  423975 out.go:368] Setting JSON to false
	I1206 11:05:41.959202  423975 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10093,"bootTime":1765009049,"procs":164,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 11:05:41.959273  423975 start.go:143] virtualization:  
	I1206 11:05:41.962659  423975 out.go:179] * [functional-196950] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 11:05:41.965760  423975 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 11:05:41.965811  423975 notify.go:221] Checking for updates...
	I1206 11:05:41.971719  423975 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 11:05:41.974704  423975 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 11:05:41.977639  423975 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-362985/.minikube
	I1206 11:05:41.980611  423975 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 11:05:41.983549  423975 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 11:05:41.986961  423975 config.go:182] Loaded profile config "functional-196950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 11:05:41.987685  423975 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 11:05:42.035258  423975 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 11:05:42.035460  423975 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:05:42.103002  423975 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:05:42.09106596 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:05:42.103130  423975 docker.go:319] overlay module found
	I1206 11:05:42.106615  423975 out.go:179] * Using the docker driver based on existing profile
	I1206 11:05:42.118016  423975 start.go:309] selected driver: docker
	I1206 11:05:42.118069  423975 start.go:927] validating driver "docker" against &{Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:05:42.118200  423975 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 11:05:42.128616  423975 out.go:203] 
	W1206 11:05:42.139461  423975 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1206 11:05:42.143337  423975 out.go:203] 
	
	
	==> CRI-O <==
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.084499828Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=f0fd0946-b323-435f-946c-e412850eb9c0 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.085495997Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=34b6bb47-44a4-4780-9567-04c497973fa7 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.08608111Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=e26253f3-5094-4fbe-b6d1-306f2e31fa9a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.086661177Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=f8a4ffa3-a2d3-4c05-ba97-fd167ad1ff4e name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.087187984Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=43819321-fbd2-4155-9f0b-c716c27fc9ce name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.087957288Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=aeb2085e-c4e7-4d42-9049-5042f515cbdb name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.088481658Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=0f01a640-f6a6-41dd-afc3-f5cae208f89a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.079725515Z" level=info msg="Checking image status: kicbase/echo-server:functional-196950" id=f87ee22d-4408-46c2-8930-0ab8ba7ffa52 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.079908966Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.079954112Z" level=info msg="Image kicbase/echo-server:functional-196950 not found" id=f87ee22d-4408-46c2-8930-0ab8ba7ffa52 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.080016438Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-196950 found" id=f87ee22d-4408-46c2-8930-0ab8ba7ffa52 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.114067912Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-196950" id=da9dda5e-17fc-43e5-a93c-9057adb4fa98 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.114216122Z" level=info msg="Image docker.io/kicbase/echo-server:functional-196950 not found" id=da9dda5e-17fc-43e5-a93c-9057adb4fa98 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.114253587Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-196950 found" id=da9dda5e-17fc-43e5-a93c-9057adb4fa98 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.143186327Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-196950" id=629f3d91-fd66-4652-88b6-cbe010464984 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.143320859Z" level=info msg="Image localhost/kicbase/echo-server:functional-196950 not found" id=629f3d91-fd66-4652-88b6-cbe010464984 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.143363608Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-196950 found" id=629f3d91-fd66-4652-88b6-cbe010464984 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:52 functional-196950 crio[9931]: time="2025-12-06T11:03:52.005475943Z" level=info msg="Checking image status: kicbase/echo-server:functional-196950" id=c24c3057-b5d3-4f9d-ac3b-5bbc48ff2411 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:52 functional-196950 crio[9931]: time="2025-12-06T11:03:52.005746649Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 06 11:03:52 functional-196950 crio[9931]: time="2025-12-06T11:03:52.005815245Z" level=info msg="Image kicbase/echo-server:functional-196950 not found" id=c24c3057-b5d3-4f9d-ac3b-5bbc48ff2411 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:52 functional-196950 crio[9931]: time="2025-12-06T11:03:52.005906667Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-196950 found" id=c24c3057-b5d3-4f9d-ac3b-5bbc48ff2411 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:52 functional-196950 crio[9931]: time="2025-12-06T11:03:52.062141238Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-196950" id=2c21deca-fd70-4153-95f4-22abbc99a70a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:52 functional-196950 crio[9931]: time="2025-12-06T11:03:52.062283549Z" level=info msg="Image docker.io/kicbase/echo-server:functional-196950 not found" id=2c21deca-fd70-4153-95f4-22abbc99a70a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:52 functional-196950 crio[9931]: time="2025-12-06T11:03:52.062324296Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-196950 found" id=2c21deca-fd70-4153-95f4-22abbc99a70a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:52 functional-196950 crio[9931]: time="2025-12-06T11:03:52.089472813Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-196950" id=a99d8ece-a491-4b6e-b578-7c4c50168ae2 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:05:43.499211   23939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:05:43.499862   23939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:05:43.501445   23939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:05:43.501937   23939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:05:43.503598   23939 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 6 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014752] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503231] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.065820] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.901896] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.944603] kauditd_printk_skb: 39 callbacks suppressed
	[Dec 6 09:04] hrtimer: interrupt took 32230230 ns
	[Dec 6 10:25] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 6 10:26] overlayfs: idmapped layers are currently not supported
	[  +0.066821] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 6 10:32] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:33] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:51] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 11:05:43 up  2:48,  0 user,  load average: 0.60, 0.41, 0.54
	Linux functional-196950 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 11:05:41 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:05:41 functional-196950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1966.
	Dec 06 11:05:41 functional-196950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:05:41 functional-196950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:05:41 functional-196950 kubelet[23825]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:05:41 functional-196950 kubelet[23825]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:05:41 functional-196950 kubelet[23825]: E1206 11:05:41.806179   23825 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:05:41 functional-196950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:05:41 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:05:42 functional-196950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1967.
	Dec 06 11:05:42 functional-196950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:05:42 functional-196950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:05:42 functional-196950 kubelet[23839]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:05:42 functional-196950 kubelet[23839]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:05:42 functional-196950 kubelet[23839]: E1206 11:05:42.516215   23839 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:05:42 functional-196950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:05:42 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:05:43 functional-196950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1968.
	Dec 06 11:05:43 functional-196950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:05:43 functional-196950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:05:43 functional-196950 kubelet[23886]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:05:43 functional-196950 kubelet[23886]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:05:43 functional-196950 kubelet[23886]: E1206 11:05:43.281238   23886 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:05:43 functional-196950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:05:43 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-196950 -n functional-196950
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-196950 -n functional-196950: exit status 2 (328.733434ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-196950" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.78s)
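The dashboard itself is secondary here. The kubelet excerpt above shows the actual blocker: the v1.35.0-beta.0 kubelet exits at startup with "kubelet is configured to not run on a host using cgroup v1", and systemd's restart counter (1966 and climbing) shows it has been crash-looping for the whole run. With no kubelet, the apiserver behind localhost:8441 never comes up, which is why "kubectl describe nodes" gets connection refused and status reports the apiserver as Stopped. The RSRC_INSUFFICIENT_REQ_MEMORY exit in the "Last Start" log is unrelated: that invocation was a --dry-run with --memory 250MB, and 250MiB is below minikube's 1800MB usable minimum, so the refusal is expected validation. A minimal triage sketch, assuming only the container name from the docker inspect output above:

    # cgroup2fs here means cgroup v2; tmpfs means the legacy v1 hierarchy
    # that this kubelet refuses (run on the Jenkins host itself).
    stat -fc %T /sys/fs/cgroup/

    # Confirm the crash loop from inside the kic node container.
    docker exec functional-196950 journalctl -u kubelet --no-pager | tail -n 20

Per KEP-4569 (cgroup v1 moved to maintenance mode), the kubelet's failCgroupV1 setting governs this check, so the likely fixes are pinning this job to a cgroup v2 host or, while v1 support still exists, opting the kubelet back into it.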

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-196950 status: exit status 2 (316.425662ms)

-- stdout --
	functional-196950
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	

-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-arm64 -p functional-196950 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 status -f host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-196950 status -f host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (341.805909ms)

-- stdout --
	host:Running,kubelet:Stopped,apiserver:Stopped,kubeconfig:Configured

-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-arm64 -p functional-196950 status -f host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-196950 status -o json: exit status 2 (327.908172ms)

-- stdout --
	{"Name":"functional-196950","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-arm64 -p functional-196950 status -o json" : exit status 2
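All three status invocations above fail for the same reason: minikube status exits non-zero when a core component is not Running, and here both the kubelet and the apiserver are Stopped even though the host container is up. The --format/-f flag renders a Go template over the same Status struct the JSON output exposes (Name, Host, Kubelet, APIServer, Kubeconfig, Worker). A minimal sketch of an equivalent scripted check, using only the fields visible in the JSON output below:

    # Render selected Status fields through a Go template.
    out/minikube-linux-arm64 -p functional-196950 status --format='{{.Name}}: host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'

    # The JSON form carries the same fields and is easier to gate on in CI.
    out/minikube-linux-arm64 -p functional-196950 status -o json

Either form would have surfaced the kubelet crash loop described under DashboardCmd above.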
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-196950
helpers_test.go:243: (dbg) docker inspect functional-196950:

-- stdout --
	[
	    {
	        "Id": "d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1",
	        "Created": "2025-12-06T10:36:45.201779678Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 393848,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T10:36:45.318229053Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/hostname",
	        "HostsPath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/hosts",
	        "LogPath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1-json.log",
	        "Name": "/functional-196950",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-196950:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-196950",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1",
	                "LowerDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1-init/diff:/var/lib/docker/overlay2/5011226d55616c9977b14c1fe617d1302fe59373df05ce8ec6e21b79143a1c57/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-196950",
	                "Source": "/var/lib/docker/volumes/functional-196950/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-196950",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-196950",
	                "name.minikube.sigs.k8s.io": "functional-196950",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9b8f961d55d7529aed7b841f2ac9f818c22ff12b8ad73f2d6bcee22656d9749a",
	            "SandboxKey": "/var/run/docker/netns/9b8f961d55d7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33158"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-196950": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:c1:40:2a:93:47",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a566bfdfd33a868cf61e5b18b36cbd55e9868f24cbb091e055ae606aeb8c6f03",
	                    "EndpointID": "452fe32bde0c42c4c35d700488ae93aeecc6c6a971ac6f1a8a492dbc4b328ed9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-196950",
	                        "d150aac7296d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-196950 -n functional-196950
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-196950 -n functional-196950: exit status 2 (329.006266ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-196950 image ls                                                                                                                                │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │ 06 Dec 25 11:03 UTC │
	│ ssh     │ functional-196950 ssh sudo cat /usr/share/ca-certificates/364855.pem                                                                                      │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │ 06 Dec 25 11:03 UTC │
	│ image   │ functional-196950 image save kicbase/echo-server:functional-196950 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │ 06 Dec 25 11:03 UTC │
	│ ssh     │ functional-196950 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │ 06 Dec 25 11:03 UTC │
	│ image   │ functional-196950 image rm kicbase/echo-server:functional-196950 --alsologtostderr                                                                        │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │ 06 Dec 25 11:03 UTC │
	│ ssh     │ functional-196950 ssh sudo cat /etc/ssl/certs/3648552.pem                                                                                                 │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │ 06 Dec 25 11:03 UTC │
	│ ssh     │ functional-196950 ssh sudo cat /usr/share/ca-certificates/3648552.pem                                                                                     │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │ 06 Dec 25 11:03 UTC │
	│ image   │ functional-196950 image ls                                                                                                                                │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │ 06 Dec 25 11:03 UTC │
	│ image   │ functional-196950 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │ 06 Dec 25 11:03 UTC │
	│ ssh     │ functional-196950 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │ 06 Dec 25 11:03 UTC │
	│ ssh     │ functional-196950 ssh sudo cat /etc/test/nested/copy/364855/hosts                                                                                         │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │ 06 Dec 25 11:03 UTC │
	│ image   │ functional-196950 image ls                                                                                                                                │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │ 06 Dec 25 11:03 UTC │
	│ service │ functional-196950 service list                                                                                                                            │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │                     │
	│ image   │ functional-196950 image save --daemon kicbase/echo-server:functional-196950 --alsologtostderr                                                             │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │ 06 Dec 25 11:03 UTC │
	│ service │ functional-196950 service list -o json                                                                                                                    │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │                     │
	│ ssh     │ functional-196950 ssh echo hello                                                                                                                          │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │ 06 Dec 25 11:03 UTC │
	│ service │ functional-196950 service --namespace=default --https --url hello-node                                                                                    │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │                     │
	│ ssh     │ functional-196950 ssh cat /etc/hostname                                                                                                                   │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │ 06 Dec 25 11:03 UTC │
	│ service │ functional-196950 service hello-node --url --format={{.IP}}                                                                                               │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │                     │
	│ tunnel  │ functional-196950 tunnel --alsologtostderr                                                                                                                │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │                     │
	│ tunnel  │ functional-196950 tunnel --alsologtostderr                                                                                                                │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │                     │
	│ service │ functional-196950 service hello-node --url                                                                                                                │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │                     │
	│ tunnel  │ functional-196950 tunnel --alsologtostderr                                                                                                                │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │                     │
	│ addons  │ functional-196950 addons list                                                                                                                             │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │ 06 Dec 25 11:05 UTC │
	│ addons  │ functional-196950 addons list -o json                                                                                                                     │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │ 06 Dec 25 11:05 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:51:25
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:51:25.658528  405191 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:51:25.659862  405191 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:51:25.659873  405191 out.go:374] Setting ErrFile to fd 2...
	I1206 10:51:25.659879  405191 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:51:25.660272  405191 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 10:51:25.660784  405191 out.go:368] Setting JSON to false
	I1206 10:51:25.661671  405191 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9237,"bootTime":1765009049,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 10:51:25.661825  405191 start.go:143] virtualization:  
	I1206 10:51:25.665170  405191 out.go:179] * [functional-196950] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:51:25.668974  405191 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 10:51:25.669057  405191 notify.go:221] Checking for updates...
	I1206 10:51:25.674658  405191 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:51:25.677504  405191 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 10:51:25.680242  405191 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-362985/.minikube
	I1206 10:51:25.683061  405191 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:51:25.685807  405191 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:51:25.689056  405191 config.go:182] Loaded profile config "functional-196950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:51:25.689150  405191 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:51:25.719603  405191 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:51:25.719706  405191 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:51:25.776170  405191 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-06 10:51:25.766414658 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:51:25.776279  405191 docker.go:319] overlay module found
	I1206 10:51:25.779319  405191 out.go:179] * Using the docker driver based on existing profile
	I1206 10:51:25.782157  405191 start.go:309] selected driver: docker
	I1206 10:51:25.782168  405191 start.go:927] validating driver "docker" against &{Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:51:25.782268  405191 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:51:25.782379  405191 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:51:25.843232  405191 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-06 10:51:25.834027648 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:51:25.843742  405191 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 10:51:25.843762  405191 cni.go:84] Creating CNI manager for ""
	I1206 10:51:25.843817  405191 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:51:25.843868  405191 start.go:353] cluster config:
	{Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:51:25.846980  405191 out.go:179] * Starting "functional-196950" primary control-plane node in "functional-196950" cluster
	I1206 10:51:25.849840  405191 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 10:51:25.852721  405191 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:51:25.855512  405191 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 10:51:25.855549  405191 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1206 10:51:25.855557  405191 cache.go:65] Caching tarball of preloaded images
	I1206 10:51:25.855585  405191 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:51:25.855649  405191 preload.go:238] Found /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1206 10:51:25.855670  405191 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1206 10:51:25.855775  405191 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/config.json ...
	I1206 10:51:25.875281  405191 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 10:51:25.875292  405191 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 10:51:25.875312  405191 cache.go:243] Successfully downloaded all kic artifacts
	I1206 10:51:25.875342  405191 start.go:360] acquireMachinesLock for functional-196950: {Name:mkd2471f275d1d2a438cb4ce89f1d1521a0fb340 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:51:25.875462  405191 start.go:364] duration metric: took 100.145µs to acquireMachinesLock for "functional-196950"
	I1206 10:51:25.875483  405191 start.go:96] Skipping create...Using existing machine configuration
	I1206 10:51:25.875487  405191 fix.go:54] fixHost starting: 
	I1206 10:51:25.875763  405191 cli_runner.go:164] Run: docker container inspect functional-196950 --format={{.State.Status}}
	I1206 10:51:25.893454  405191 fix.go:112] recreateIfNeeded on functional-196950: state=Running err=<nil>
	W1206 10:51:25.893482  405191 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 10:51:25.896578  405191 out.go:252] * Updating the running docker "functional-196950" container ...
	I1206 10:51:25.896608  405191 machine.go:94] provisionDockerMachine start ...
	I1206 10:51:25.896697  405191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:51:25.913940  405191 main.go:143] libmachine: Using SSH client type: native
	I1206 10:51:25.914320  405191 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1206 10:51:25.914327  405191 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:51:26.075155  405191 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-196950
	
	I1206 10:51:26.075169  405191 ubuntu.go:182] provisioning hostname "functional-196950"
	I1206 10:51:26.075252  405191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:51:26.094744  405191 main.go:143] libmachine: Using SSH client type: native
	I1206 10:51:26.095070  405191 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1206 10:51:26.095080  405191 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-196950 && echo "functional-196950" | sudo tee /etc/hostname
	I1206 10:51:26.261114  405191 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-196950
	
	I1206 10:51:26.261197  405191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:51:26.279848  405191 main.go:143] libmachine: Using SSH client type: native
	I1206 10:51:26.280166  405191 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1206 10:51:26.280180  405191 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-196950' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-196950/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-196950' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:51:26.431933  405191 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 10:51:26.431953  405191 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-362985/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-362985/.minikube}
	I1206 10:51:26.431971  405191 ubuntu.go:190] setting up certificates
	I1206 10:51:26.431995  405191 provision.go:84] configureAuth start
	I1206 10:51:26.432056  405191 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-196950
	I1206 10:51:26.450343  405191 provision.go:143] copyHostCerts
	I1206 10:51:26.450415  405191 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem, removing ...
	I1206 10:51:26.450432  405191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem
	I1206 10:51:26.450505  405191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem (1082 bytes)
	I1206 10:51:26.450607  405191 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem, removing ...
	I1206 10:51:26.450611  405191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem
	I1206 10:51:26.450636  405191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem (1123 bytes)
	I1206 10:51:26.450689  405191 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem, removing ...
	I1206 10:51:26.450693  405191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem
	I1206 10:51:26.450714  405191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem (1679 bytes)
	I1206 10:51:26.450755  405191 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem org=jenkins.functional-196950 san=[127.0.0.1 192.168.49.2 functional-196950 localhost minikube]
	I1206 10:51:26.540911  405191 provision.go:177] copyRemoteCerts
	I1206 10:51:26.540967  405191 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:51:26.541011  405191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:51:26.559000  405191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:51:26.664415  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 10:51:26.682850  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 10:51:26.700635  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 10:51:26.720260  405191 provision.go:87] duration metric: took 288.251554ms to configureAuth
	I1206 10:51:26.720277  405191 ubuntu.go:206] setting minikube options for container-runtime
	I1206 10:51:26.720482  405191 config.go:182] Loaded profile config "functional-196950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:51:26.720577  405191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:51:26.740294  405191 main.go:143] libmachine: Using SSH client type: native
	I1206 10:51:26.740607  405191 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1206 10:51:26.740618  405191 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 10:51:27.107160  405191 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 10:51:27.107175  405191 machine.go:97] duration metric: took 1.210560762s to provisionDockerMachine
	I1206 10:51:27.107185  405191 start.go:293] postStartSetup for "functional-196950" (driver="docker")
	I1206 10:51:27.107196  405191 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:51:27.107253  405191 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:51:27.107294  405191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:51:27.129039  405191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:51:27.236148  405191 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:51:27.240016  405191 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 10:51:27.240036  405191 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 10:51:27.240047  405191 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-362985/.minikube/addons for local assets ...
	I1206 10:51:27.240125  405191 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-362985/.minikube/files for local assets ...
	I1206 10:51:27.240216  405191 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem -> 3648552.pem in /etc/ssl/certs
	I1206 10:51:27.240311  405191 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/test/nested/copy/364855/hosts -> hosts in /etc/test/nested/copy/364855
	I1206 10:51:27.240389  405191 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/364855
	I1206 10:51:27.248525  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem --> /etc/ssl/certs/3648552.pem (1708 bytes)
	I1206 10:51:27.267246  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/test/nested/copy/364855/hosts --> /etc/test/nested/copy/364855/hosts (40 bytes)
	I1206 10:51:27.285080  405191 start.go:296] duration metric: took 177.880099ms for postStartSetup
	I1206 10:51:27.285152  405191 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:51:27.285189  405191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:51:27.302563  405191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:51:27.404400  405191 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 10:51:27.408968  405191 fix.go:56] duration metric: took 1.533473357s for fixHost
	I1206 10:51:27.408984  405191 start.go:83] releasing machines lock for "functional-196950", held for 1.533513702s
	I1206 10:51:27.409052  405191 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-196950
	I1206 10:51:27.427444  405191 ssh_runner.go:195] Run: cat /version.json
	I1206 10:51:27.427475  405191 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:51:27.427488  405191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:51:27.427532  405191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:51:27.449136  405191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:51:27.450292  405191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:51:27.555364  405191 ssh_runner.go:195] Run: systemctl --version
	I1206 10:51:27.645936  405191 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 10:51:27.683240  405191 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 10:51:27.687562  405191 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:51:27.687626  405191 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:51:27.695460  405191 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 10:51:27.695474  405191 start.go:496] detecting cgroup driver to use...
	I1206 10:51:27.695505  405191 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 10:51:27.695551  405191 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 10:51:27.711018  405191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 10:51:27.724651  405191 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:51:27.724707  405191 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:51:27.740806  405191 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:51:27.754100  405191 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:51:27.883046  405191 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:51:28.013378  405191 docker.go:234] disabling docker service ...
	I1206 10:51:28.013440  405191 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:51:28.030310  405191 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:51:28.044424  405191 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:51:28.162200  405191 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:51:28.315775  405191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 10:51:28.333888  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:51:28.350625  405191 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 10:51:28.350700  405191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:51:28.360184  405191 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 10:51:28.360243  405191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:51:28.369224  405191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:51:28.378656  405191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:51:28.387862  405191 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:51:28.396244  405191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:51:28.405446  405191 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:51:28.414057  405191 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:51:28.423226  405191 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:51:28.430865  405191 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 10:51:28.438644  405191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:51:28.553737  405191 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 10:51:28.722710  405191 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 10:51:28.722782  405191 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 10:51:28.727796  405191 start.go:564] Will wait 60s for crictl version
	I1206 10:51:28.727854  405191 ssh_runner.go:195] Run: which crictl
	I1206 10:51:28.731603  405191 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 10:51:28.757634  405191 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 10:51:28.757708  405191 ssh_runner.go:195] Run: crio --version
	I1206 10:51:28.786864  405191 ssh_runner.go:195] Run: crio --version
	I1206 10:51:28.819624  405191 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1206 10:51:28.822438  405191 cli_runner.go:164] Run: docker network inspect functional-196950 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:51:28.838919  405191 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 10:51:28.845850  405191 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1206 10:51:28.848840  405191 kubeadm.go:884] updating cluster {Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:51:28.848980  405191 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 10:51:28.849059  405191 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:51:28.884770  405191 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:51:28.884782  405191 crio.go:433] Images already preloaded, skipping extraction
	I1206 10:51:28.884839  405191 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:51:28.911560  405191 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:51:28.911574  405191 cache_images.go:86] Images are preloaded, skipping loading
	I1206 10:51:28.911581  405191 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1206 10:51:28.911685  405191 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-196950 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 10:51:28.911771  405191 ssh_runner.go:195] Run: crio config
	I1206 10:51:28.966566  405191 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1206 10:51:28.966595  405191 cni.go:84] Creating CNI manager for ""
	I1206 10:51:28.966604  405191 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:51:28.966619  405191 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 10:51:28.966641  405191 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-196950 NodeName:functional-196950 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:51:28.966791  405191 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-196950"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 10:51:28.966870  405191 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 10:51:28.978798  405191 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:51:28.978872  405191 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:51:28.987304  405191 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1206 10:51:29.001847  405191 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 10:51:29.017577  405191 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1206 10:51:29.031751  405191 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 10:51:29.036513  405191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:51:29.155805  405191 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:51:29.722153  405191 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950 for IP: 192.168.49.2
	I1206 10:51:29.722163  405191 certs.go:195] generating shared ca certs ...
	I1206 10:51:29.722178  405191 certs.go:227] acquiring lock for ca certs: {Name:mke2ec61a37b6f3abbcbeb9abd23d6a19d011dd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:51:29.722312  405191 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.key
	I1206 10:51:29.722350  405191 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.key
	I1206 10:51:29.722357  405191 certs.go:257] generating profile certs ...
	I1206 10:51:29.722458  405191 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.key
	I1206 10:51:29.722506  405191 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.key.a77b39a6
	I1206 10:51:29.722550  405191 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.key
	I1206 10:51:29.722659  405191 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855.pem (1338 bytes)
	W1206 10:51:29.722686  405191 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855_empty.pem, impossibly tiny 0 bytes
	I1206 10:51:29.722693  405191 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 10:51:29.722721  405191 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem (1082 bytes)
	I1206 10:51:29.722747  405191 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:51:29.722776  405191 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem (1679 bytes)
	I1206 10:51:29.722816  405191 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem (1708 bytes)
	I1206 10:51:29.723422  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:51:29.745118  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 10:51:29.764772  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:51:29.783979  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 10:51:29.803249  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 10:51:29.821820  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 10:51:29.840052  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:51:29.858172  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 10:51:29.876447  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:51:29.894619  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855.pem --> /usr/share/ca-certificates/364855.pem (1338 bytes)
	I1206 10:51:29.912710  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem --> /usr/share/ca-certificates/3648552.pem (1708 bytes)
	I1206 10:51:29.930993  405191 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 10:51:29.944776  405191 ssh_runner.go:195] Run: openssl version
	I1206 10:51:29.951232  405191 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:51:29.958913  405191 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:51:29.966922  405191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:51:29.970672  405191 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:26 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:51:29.970730  405191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:51:30.016305  405191 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 10:51:30.031889  405191 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364855.pem
	I1206 10:51:30.048455  405191 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364855.pem /etc/ssl/certs/364855.pem
	I1206 10:51:30.063564  405191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364855.pem
	I1206 10:51:30.076207  405191 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:36 /usr/share/ca-certificates/364855.pem
	I1206 10:51:30.076271  405191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364855.pem
	I1206 10:51:30.128156  405191 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 10:51:30.136853  405191 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648552.pem
	I1206 10:51:30.146061  405191 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648552.pem /etc/ssl/certs/3648552.pem
	I1206 10:51:30.154785  405191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648552.pem
	I1206 10:51:30.159209  405191 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:36 /usr/share/ca-certificates/3648552.pem
	I1206 10:51:30.159296  405191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648552.pem
	I1206 10:51:30.202450  405191 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 10:51:30.210421  405191 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:51:30.214689  405191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 10:51:30.257294  405191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 10:51:30.301161  405191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 10:51:30.342552  405191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 10:51:30.384443  405191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 10:51:30.426153  405191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1206 10:51:30.467193  405191 kubeadm.go:401] StartCluster: {Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:51:30.467269  405191 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:51:30.467336  405191 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:51:30.505294  405191 cri.go:89] found id: ""
	I1206 10:51:30.505356  405191 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:51:30.514317  405191 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 10:51:30.514327  405191 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 10:51:30.514378  405191 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 10:51:30.522953  405191 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:51:30.523619  405191 kubeconfig.go:125] found "functional-196950" server: "https://192.168.49.2:8441"
	I1206 10:51:30.525284  405191 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 10:51:30.535655  405191 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-06 10:36:53.608460602 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-06 10:51:29.025529796 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
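The drift check above is a plain `diff -u` against the freshly rendered config: exit status 0 means the files match, 1 means they differ (with the unified diff on stdout, as shown), and 2 signals an error. A sketch of that decision in Go, using the paths from the log (a hypothetical standalone main, run on the node itself rather than over SSH):

	package main

	import (
		"errors"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		current := "/var/tmp/minikube/kubeadm.yaml"
		rendered := "/var/tmp/minikube/kubeadm.yaml.new"

		out, err := exec.Command("diff", "-u", current, rendered).Output()
		if err == nil {
			fmt.Println("no kubeadm config drift")
			return
		}
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 1 {
			// diff exits 1 when the inputs differ; out holds the unified diff
			fmt.Printf("config drift detected, will reconfigure:\n%s", out)
			return
		}
		log.Fatalf("diff failed: %v", err) // exit 2: missing file or other trouble
	}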
	I1206 10:51:30.535667  405191 kubeadm.go:1161] stopping kube-system containers ...
	I1206 10:51:30.535679  405191 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1206 10:51:30.535750  405191 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:51:30.563289  405191 cri.go:89] found id: ""
	I1206 10:51:30.563367  405191 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1206 10:51:30.577669  405191 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:51:30.585599  405191 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec  6 10:40 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  6 10:40 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  6 10:40 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec  6 10:40 /etc/kubernetes/scheduler.conf
	
	I1206 10:51:30.585661  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:51:30.593607  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:51:30.601561  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:51:30.601615  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:51:30.609082  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:51:30.616706  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:51:30.616764  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:51:30.624576  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:51:30.632333  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:51:30.632396  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 10:51:30.640022  405191 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 10:51:30.648015  405191 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:51:30.694279  405191 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:51:31.789747  405191 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.095443049s)
	I1206 10:51:31.789807  405191 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:51:31.992373  405191 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:51:32.066243  405191 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
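On this restart path the control plane is rebuilt by replaying selected `kubeadm init` phases individually, in the order shown above: certs, kubeconfig, kubelet-start, control-plane, then etcd, each against the same rendered config. A rough sketch of that sequence (binary directory and config path taken from the log; assumed to run locally as root-capable user):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		binDir := "/var/lib/minikube/binaries/v1.35.0-beta.0"
		cfg := "/var/tmp/minikube/kubeadm.yaml"

		// Replay the init phases in the order the restart path uses.
		phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
		for _, phase := range phases {
			script := fmt.Sprintf(`env PATH="%s:$PATH" kubeadm init phase %s --config %s`, binDir, phase, cfg)
			out, err := exec.Command("sudo", "/bin/bash", "-c", script).CombinedOutput()
			if err != nil {
				log.Fatalf("phase %q failed: %v\n%s", phase, err, out)
			}
		}
		fmt.Println("control-plane phases completed")
	}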
	I1206 10:51:32.115098  405191 api_server.go:52] waiting for apiserver process to appear ...
	I1206 10:51:32.115193  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... 119 identical `sudo pgrep -xnf kube-apiserver.*minikube.*` probes elided, repeating on a ~500ms cadence from 10:51:32.616 through 10:52:31.616 without a kube-apiserver process appearing ...]
	I1206 10:52:32.116069  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:52:32.116145  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:52:32.141390  405191 cri.go:89] found id: ""
	I1206 10:52:32.141404  405191 logs.go:282] 0 containers: []
	W1206 10:52:32.141411  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:52:32.141416  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:52:32.141473  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:52:32.166484  405191 cri.go:89] found id: ""
	I1206 10:52:32.166497  405191 logs.go:282] 0 containers: []
	W1206 10:52:32.166504  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:52:32.166509  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:52:32.166565  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:52:32.194996  405191 cri.go:89] found id: ""
	I1206 10:52:32.195009  405191 logs.go:282] 0 containers: []
	W1206 10:52:32.195016  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:52:32.195021  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:52:32.195076  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:52:32.221300  405191 cri.go:89] found id: ""
	I1206 10:52:32.221313  405191 logs.go:282] 0 containers: []
	W1206 10:52:32.221321  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:52:32.221326  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:52:32.221382  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:52:32.247157  405191 cri.go:89] found id: ""
	I1206 10:52:32.247171  405191 logs.go:282] 0 containers: []
	W1206 10:52:32.247178  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:52:32.247201  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:52:32.247261  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:52:32.272996  405191 cri.go:89] found id: ""
	I1206 10:52:32.273011  405191 logs.go:282] 0 containers: []
	W1206 10:52:32.273018  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:52:32.273023  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:52:32.273087  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:52:32.298872  405191 cri.go:89] found id: ""
	I1206 10:52:32.298885  405191 logs.go:282] 0 containers: []
	W1206 10:52:32.298892  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:52:32.298899  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:52:32.298909  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:52:32.365036  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:52:32.365056  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:52:32.380152  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:52:32.380168  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:52:32.448480  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:52:32.439513   11005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:32.440191   11005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:32.441917   11005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:32.442441   11005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:32.444184   11005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:52:32.439513   11005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:32.440191   11005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:32.441917   11005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:32.442441   11005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:32.444184   11005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:52:32.448508  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:52:32.448519  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:52:32.521363  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:52:32.521385  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
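Each `found id: ""` above comes from `crictl ps -a --quiet`, which prints one container ID per line and nothing at all when no container matches the name filter; an empty result is what yields the `0 containers` entries. A sketch of that check, assuming crictl is installed on the node:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
		if err != nil {
			log.Fatal(err)
		}
		ids := strings.Fields(string(out)) // empty slice => "0 containers"
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}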
	[... five further near-identical retry rounds elided (10:52:35, 10:52:38, 10:52:41, 10:52:44, 10:52:47): each round probes for the apiserver with pgrep, lists kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, and kindnet via crictl and finds no containers, then gathers kubelet, dmesg, CRI-O, and container-status logs, with every `kubectl describe nodes` attempt refused on localhost:8441 ...]
	I1206 10:52:49.890726  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:52:49.902627  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:52:49.902688  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:52:49.929201  405191 cri.go:89] found id: ""
	I1206 10:52:49.929215  405191 logs.go:282] 0 containers: []
	W1206 10:52:49.929224  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:52:49.929230  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:52:49.929290  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:52:49.956185  405191 cri.go:89] found id: ""
	I1206 10:52:49.956198  405191 logs.go:282] 0 containers: []
	W1206 10:52:49.956205  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:52:49.956210  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:52:49.956269  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:52:49.993314  405191 cri.go:89] found id: ""
	I1206 10:52:49.993329  405191 logs.go:282] 0 containers: []
	W1206 10:52:49.993336  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:52:49.993343  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:52:49.993403  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:52:50.037379  405191 cri.go:89] found id: ""
	I1206 10:52:50.037395  405191 logs.go:282] 0 containers: []
	W1206 10:52:50.037403  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:52:50.037409  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:52:50.037472  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:52:50.067336  405191 cri.go:89] found id: ""
	I1206 10:52:50.067351  405191 logs.go:282] 0 containers: []
	W1206 10:52:50.067358  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:52:50.067363  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:52:50.067469  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:52:50.094997  405191 cri.go:89] found id: ""
	I1206 10:52:50.095010  405191 logs.go:282] 0 containers: []
	W1206 10:52:50.095018  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:52:50.095023  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:52:50.095087  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:52:50.122233  405191 cri.go:89] found id: ""
	I1206 10:52:50.122247  405191 logs.go:282] 0 containers: []
	W1206 10:52:50.122254  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:52:50.122262  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:52:50.122274  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:52:50.137790  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:52:50.137811  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:52:50.201020  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:52:50.192768   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:50.193599   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:50.195170   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:50.195719   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:50.197320   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:52:50.192768   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:50.193599   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:50.195170   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:50.195719   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:50.197320   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:52:50.201031  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:52:50.201041  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:52:50.275122  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:52:50.275142  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:52:50.303756  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:52:50.303777  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:52:52.872285  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:52:52.882349  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:52:52.882406  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:52:52.911618  405191 cri.go:89] found id: ""
	I1206 10:52:52.911631  405191 logs.go:282] 0 containers: []
	W1206 10:52:52.911638  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:52:52.911644  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:52:52.911705  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:52:52.937062  405191 cri.go:89] found id: ""
	I1206 10:52:52.937077  405191 logs.go:282] 0 containers: []
	W1206 10:52:52.937084  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:52:52.937089  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:52:52.937149  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:52:52.963326  405191 cri.go:89] found id: ""
	I1206 10:52:52.963340  405191 logs.go:282] 0 containers: []
	W1206 10:52:52.963347  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:52:52.963352  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:52:52.963437  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:52:52.997061  405191 cri.go:89] found id: ""
	I1206 10:52:52.997074  405191 logs.go:282] 0 containers: []
	W1206 10:52:52.997081  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:52:52.997086  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:52:52.997149  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:52:53.035456  405191 cri.go:89] found id: ""
	I1206 10:52:53.035469  405191 logs.go:282] 0 containers: []
	W1206 10:52:53.035477  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:52:53.035483  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:52:53.035543  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:52:53.063687  405191 cri.go:89] found id: ""
	I1206 10:52:53.063700  405191 logs.go:282] 0 containers: []
	W1206 10:52:53.063707  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:52:53.063712  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:52:53.063770  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:52:53.089131  405191 cri.go:89] found id: ""
	I1206 10:52:53.089145  405191 logs.go:282] 0 containers: []
	W1206 10:52:53.089152  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:52:53.089161  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:52:53.089180  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:52:53.154130  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:52:53.145768   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:53.146202   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:53.147939   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:53.148440   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:53.150128   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:52:53.145768   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:53.146202   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:53.147939   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:53.148440   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:53.150128   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:52:53.154142  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:52:53.154153  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:52:53.226211  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:52:53.226231  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:52:53.255876  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:52:53.255893  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:52:53.328864  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:52:53.328884  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:52:55.844855  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:52:55.855173  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:52:55.855232  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:52:55.882003  405191 cri.go:89] found id: ""
	I1206 10:52:55.882016  405191 logs.go:282] 0 containers: []
	W1206 10:52:55.882037  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:52:55.882043  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:52:55.882102  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:52:55.906679  405191 cri.go:89] found id: ""
	I1206 10:52:55.906693  405191 logs.go:282] 0 containers: []
	W1206 10:52:55.906700  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:52:55.906705  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:52:55.906763  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:52:55.932742  405191 cri.go:89] found id: ""
	I1206 10:52:55.932756  405191 logs.go:282] 0 containers: []
	W1206 10:52:55.932763  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:52:55.932769  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:52:55.932830  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:52:55.959084  405191 cri.go:89] found id: ""
	I1206 10:52:55.959097  405191 logs.go:282] 0 containers: []
	W1206 10:52:55.959104  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:52:55.959109  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:52:55.959167  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:52:56.001438  405191 cri.go:89] found id: ""
	I1206 10:52:56.001453  405191 logs.go:282] 0 containers: []
	W1206 10:52:56.001461  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:52:56.001467  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:52:56.001540  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:52:56.039276  405191 cri.go:89] found id: ""
	I1206 10:52:56.039291  405191 logs.go:282] 0 containers: []
	W1206 10:52:56.039298  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:52:56.039304  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:52:56.039368  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:52:56.074083  405191 cri.go:89] found id: ""
	I1206 10:52:56.074097  405191 logs.go:282] 0 containers: []
	W1206 10:52:56.074104  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:52:56.074112  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:52:56.074124  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:52:56.148294  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:52:56.148320  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:52:56.163720  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:52:56.163740  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:52:56.231608  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:52:56.222271   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:56.222910   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:56.224621   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:56.225337   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:56.227055   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:52:56.222271   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:56.222910   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:56.224621   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:56.225337   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:56.227055   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:52:56.231633  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:52:56.231644  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:52:56.301348  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:52:56.301373  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:52:58.834132  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:52:58.844214  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:52:58.844271  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:52:58.871604  405191 cri.go:89] found id: ""
	I1206 10:52:58.871618  405191 logs.go:282] 0 containers: []
	W1206 10:52:58.871625  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:52:58.871630  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:52:58.871689  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:52:58.898243  405191 cri.go:89] found id: ""
	I1206 10:52:58.898257  405191 logs.go:282] 0 containers: []
	W1206 10:52:58.898264  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:52:58.898269  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:52:58.898325  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:52:58.921887  405191 cri.go:89] found id: ""
	I1206 10:52:58.921901  405191 logs.go:282] 0 containers: []
	W1206 10:52:58.921907  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:52:58.921913  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:52:58.921970  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:52:58.947546  405191 cri.go:89] found id: ""
	I1206 10:52:58.947563  405191 logs.go:282] 0 containers: []
	W1206 10:52:58.947570  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:52:58.947575  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:52:58.947645  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:52:58.976915  405191 cri.go:89] found id: ""
	I1206 10:52:58.976930  405191 logs.go:282] 0 containers: []
	W1206 10:52:58.976937  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:52:58.976942  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:52:58.977005  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:52:59.013936  405191 cri.go:89] found id: ""
	I1206 10:52:59.013949  405191 logs.go:282] 0 containers: []
	W1206 10:52:59.013956  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:52:59.013962  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:52:59.014020  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:52:59.044670  405191 cri.go:89] found id: ""
	I1206 10:52:59.044683  405191 logs.go:282] 0 containers: []
	W1206 10:52:59.044690  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:52:59.044698  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:52:59.044708  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:52:59.111552  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:52:59.111571  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:52:59.125917  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:52:59.125933  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:52:59.190341  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:52:59.182165   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:59.182776   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:59.184355   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:59.184805   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:59.186371   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:52:59.182165   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:59.182776   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:59.184355   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:59.184805   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:59.186371   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:52:59.190351  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:52:59.190362  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:52:59.258936  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:52:59.258957  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:01.790777  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:01.802470  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:01.802534  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:01.828331  405191 cri.go:89] found id: ""
	I1206 10:53:01.828345  405191 logs.go:282] 0 containers: []
	W1206 10:53:01.828352  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:01.828357  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:01.828415  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:01.853132  405191 cri.go:89] found id: ""
	I1206 10:53:01.853145  405191 logs.go:282] 0 containers: []
	W1206 10:53:01.853153  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:01.853158  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:01.853218  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:01.879034  405191 cri.go:89] found id: ""
	I1206 10:53:01.879048  405191 logs.go:282] 0 containers: []
	W1206 10:53:01.879055  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:01.879060  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:01.879119  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:01.905079  405191 cri.go:89] found id: ""
	I1206 10:53:01.905094  405191 logs.go:282] 0 containers: []
	W1206 10:53:01.905101  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:01.905106  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:01.905168  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:01.931029  405191 cri.go:89] found id: ""
	I1206 10:53:01.931043  405191 logs.go:282] 0 containers: []
	W1206 10:53:01.931050  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:01.931055  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:01.931115  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:01.958324  405191 cri.go:89] found id: ""
	I1206 10:53:01.958338  405191 logs.go:282] 0 containers: []
	W1206 10:53:01.958345  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:01.958351  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:01.958406  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:01.999570  405191 cri.go:89] found id: ""
	I1206 10:53:01.999583  405191 logs.go:282] 0 containers: []
	W1206 10:53:01.999590  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:01.999598  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:01.999613  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:02.075754  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:02.075775  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:02.091145  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:02.091168  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:02.166018  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:02.151211   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:02.151882   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:02.153563   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:02.154149   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:02.161209   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:02.151211   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:02.151882   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:02.153563   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:02.154149   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:02.161209   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:53:02.166029  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:02.166041  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:02.236832  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:02.236853  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:04.769770  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:04.780155  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:04.780230  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:04.805785  405191 cri.go:89] found id: ""
	I1206 10:53:04.805799  405191 logs.go:282] 0 containers: []
	W1206 10:53:04.805806  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:04.805811  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:04.805871  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:04.833423  405191 cri.go:89] found id: ""
	I1206 10:53:04.833445  405191 logs.go:282] 0 containers: []
	W1206 10:53:04.833452  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:04.833458  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:04.833523  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:04.859864  405191 cri.go:89] found id: ""
	I1206 10:53:04.859879  405191 logs.go:282] 0 containers: []
	W1206 10:53:04.859888  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:04.859895  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:04.859964  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:04.886417  405191 cri.go:89] found id: ""
	I1206 10:53:04.886431  405191 logs.go:282] 0 containers: []
	W1206 10:53:04.886437  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:04.886443  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:04.886503  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:04.912019  405191 cri.go:89] found id: ""
	I1206 10:53:04.912033  405191 logs.go:282] 0 containers: []
	W1206 10:53:04.912040  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:04.912044  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:04.912104  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:04.941901  405191 cri.go:89] found id: ""
	I1206 10:53:04.941915  405191 logs.go:282] 0 containers: []
	W1206 10:53:04.941922  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:04.941928  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:04.941990  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:04.967316  405191 cri.go:89] found id: ""
	I1206 10:53:04.967330  405191 logs.go:282] 0 containers: []
	W1206 10:53:04.967337  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:04.967344  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:04.967356  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:05.048268  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:05.048290  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:05.064282  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:05.064299  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:05.132111  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:05.123756   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:05.124563   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:05.126211   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:05.126545   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:05.128101   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:05.123756   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:05.124563   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:05.126211   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:05.126545   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:05.128101   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:53:05.132131  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:05.132142  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:05.202438  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:05.202460  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:07.731737  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:07.742255  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:07.742344  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:07.767645  405191 cri.go:89] found id: ""
	I1206 10:53:07.767659  405191 logs.go:282] 0 containers: []
	W1206 10:53:07.767666  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:07.767671  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:07.767730  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:07.793951  405191 cri.go:89] found id: ""
	I1206 10:53:07.793975  405191 logs.go:282] 0 containers: []
	W1206 10:53:07.793983  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:07.793989  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:07.794055  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:07.819683  405191 cri.go:89] found id: ""
	I1206 10:53:07.819699  405191 logs.go:282] 0 containers: []
	W1206 10:53:07.819705  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:07.819711  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:07.819784  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:07.851523  405191 cri.go:89] found id: ""
	I1206 10:53:07.851537  405191 logs.go:282] 0 containers: []
	W1206 10:53:07.851543  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:07.851549  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:07.851627  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:07.878807  405191 cri.go:89] found id: ""
	I1206 10:53:07.878831  405191 logs.go:282] 0 containers: []
	W1206 10:53:07.878838  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:07.878844  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:07.878915  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:07.911047  405191 cri.go:89] found id: ""
	I1206 10:53:07.911060  405191 logs.go:282] 0 containers: []
	W1206 10:53:07.911078  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:07.911084  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:07.911155  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:07.937042  405191 cri.go:89] found id: ""
	I1206 10:53:07.937064  405191 logs.go:282] 0 containers: []
	W1206 10:53:07.937072  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:07.937080  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:07.937091  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:08.004528  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:08.004551  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:08.026930  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:08.026947  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:08.109064  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:08.100555   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:08.101020   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:08.102569   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:08.102918   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:08.104386   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:08.100555   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:08.101020   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:08.102569   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:08.102918   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:08.104386   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:53:08.109086  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:08.109096  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:08.177486  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:08.177508  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:10.706543  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:10.717198  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:10.717262  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:10.743532  405191 cri.go:89] found id: ""
	I1206 10:53:10.743545  405191 logs.go:282] 0 containers: []
	W1206 10:53:10.743552  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:10.743557  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:10.743617  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:10.768882  405191 cri.go:89] found id: ""
	I1206 10:53:10.768897  405191 logs.go:282] 0 containers: []
	W1206 10:53:10.768903  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:10.768908  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:10.768966  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:10.798729  405191 cri.go:89] found id: ""
	I1206 10:53:10.798742  405191 logs.go:282] 0 containers: []
	W1206 10:53:10.798751  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:10.798756  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:10.798814  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:10.823956  405191 cri.go:89] found id: ""
	I1206 10:53:10.823971  405191 logs.go:282] 0 containers: []
	W1206 10:53:10.823978  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:10.823984  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:10.824054  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:10.849242  405191 cri.go:89] found id: ""
	I1206 10:53:10.849271  405191 logs.go:282] 0 containers: []
	W1206 10:53:10.849278  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:10.849283  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:10.849351  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:10.876058  405191 cri.go:89] found id: ""
	I1206 10:53:10.876071  405191 logs.go:282] 0 containers: []
	W1206 10:53:10.876078  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:10.876086  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:10.876145  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:10.901170  405191 cri.go:89] found id: ""
	I1206 10:53:10.901184  405191 logs.go:282] 0 containers: []
	W1206 10:53:10.901192  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:10.901199  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:10.901210  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:10.971362  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:10.971388  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:11.005981  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:11.006000  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:11.089894  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:11.089916  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:11.106328  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:11.106365  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:11.174633  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:11.166045   12392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:11.167001   12392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:11.168645   12392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:11.169014   12392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:11.170539   12392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:11.166045   12392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:11.167001   12392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:11.168645   12392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:11.169014   12392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:11.170539   12392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
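The sequence above is one iteration of minikube's apiserver health-check loop: roughly every three seconds it looks for a kube-apiserver process with pgrep, queries the CRI runtime for each control-plane container with crictl, finds none, and then gathers kubelet, dmesg, describe-nodes, CRI-O and container-status output for the log. The sketch below is a hypothetical, standalone reproduction of that retry loop, not minikube's actual code; the 3-second cadence and the crictl invocation are taken from the log lines above, and it assumes crictl and passwordless sudo are available on the host.

// wait_apiserver.go - hypothetical reproduction of the retry loop seen in
// the log: poll until a kube-apiserver container shows up in the CRI runtime.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // assumed overall timeout
	for time.Now().Before(deadline) {
		// Same query the log shows minikube running over SSH.
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name=kube-apiserver").Output()
		if err == nil && strings.TrimSpace(string(out)) != "" {
			fmt.Println("kube-apiserver container found:", strings.TrimSpace(string(out)))
			return
		}
		time.Sleep(3 * time.Second) // interval matches the ~3s cadence in the log
	}
	fmt.Println("timed out: no kube-apiserver container appeared")
}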
	I1206 10:53:13.674898  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:13.689619  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:13.689793  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:13.724853  405191 cri.go:89] found id: ""
	I1206 10:53:13.724867  405191 logs.go:282] 0 containers: []
	W1206 10:53:13.724874  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:13.724880  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:13.724939  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:13.751349  405191 cri.go:89] found id: ""
	I1206 10:53:13.751363  405191 logs.go:282] 0 containers: []
	W1206 10:53:13.751369  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:13.751402  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:13.751488  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:13.778380  405191 cri.go:89] found id: ""
	I1206 10:53:13.778395  405191 logs.go:282] 0 containers: []
	W1206 10:53:13.778402  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:13.778408  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:13.778474  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:13.806068  405191 cri.go:89] found id: ""
	I1206 10:53:13.806081  405191 logs.go:282] 0 containers: []
	W1206 10:53:13.806088  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:13.806093  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:13.806150  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:13.831347  405191 cri.go:89] found id: ""
	I1206 10:53:13.831360  405191 logs.go:282] 0 containers: []
	W1206 10:53:13.831367  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:13.831410  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:13.831494  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:13.856962  405191 cri.go:89] found id: ""
	I1206 10:53:13.856976  405191 logs.go:282] 0 containers: []
	W1206 10:53:13.856983  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:13.856994  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:13.857057  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:13.883227  405191 cri.go:89] found id: ""
	I1206 10:53:13.883241  405191 logs.go:282] 0 containers: []
	W1206 10:53:13.883248  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:13.883256  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:13.883268  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:13.912731  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:13.912749  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:13.981562  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:13.981581  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:13.997805  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:13.997822  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:14.076333  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:14.066553   12495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:14.067750   12495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:14.068525   12495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:14.070431   12495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:14.071166   12495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:14.066553   12495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:14.067750   12495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:14.068525   12495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:14.070431   12495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:14.071166   12495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:53:14.076343  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:14.076355  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:16.646007  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:16.656726  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:16.656822  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:16.682515  405191 cri.go:89] found id: ""
	I1206 10:53:16.682529  405191 logs.go:282] 0 containers: []
	W1206 10:53:16.682535  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:16.682541  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:16.682609  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:16.708327  405191 cri.go:89] found id: ""
	I1206 10:53:16.708341  405191 logs.go:282] 0 containers: []
	W1206 10:53:16.708359  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:16.708365  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:16.708433  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:16.744002  405191 cri.go:89] found id: ""
	I1206 10:53:16.744023  405191 logs.go:282] 0 containers: []
	W1206 10:53:16.744032  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:16.744037  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:16.744099  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:16.771487  405191 cri.go:89] found id: ""
	I1206 10:53:16.771501  405191 logs.go:282] 0 containers: []
	W1206 10:53:16.771509  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:16.771514  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:16.771594  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:16.799494  405191 cri.go:89] found id: ""
	I1206 10:53:16.799507  405191 logs.go:282] 0 containers: []
	W1206 10:53:16.799514  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:16.799520  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:16.799595  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:16.825114  405191 cri.go:89] found id: ""
	I1206 10:53:16.825128  405191 logs.go:282] 0 containers: []
	W1206 10:53:16.825135  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:16.825141  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:16.825204  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:16.851277  405191 cri.go:89] found id: ""
	I1206 10:53:16.851304  405191 logs.go:282] 0 containers: []
	W1206 10:53:16.851312  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:16.851319  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:16.851329  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:16.880918  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:16.880935  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:16.946617  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:16.946636  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:16.961739  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:16.961756  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:17.047880  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:17.038809   12598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:17.039588   12598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:17.041249   12598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:17.041748   12598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:17.043299   12598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:17.038809   12598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:17.039588   12598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:17.041249   12598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:17.041748   12598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:17.043299   12598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:53:17.047890  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:17.047901  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:19.616855  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:19.627228  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:19.627288  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:19.654067  405191 cri.go:89] found id: ""
	I1206 10:53:19.654081  405191 logs.go:282] 0 containers: []
	W1206 10:53:19.654088  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:19.654093  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:19.654166  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:19.679488  405191 cri.go:89] found id: ""
	I1206 10:53:19.679502  405191 logs.go:282] 0 containers: []
	W1206 10:53:19.679509  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:19.679515  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:19.679573  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:19.706620  405191 cri.go:89] found id: ""
	I1206 10:53:19.706635  405191 logs.go:282] 0 containers: []
	W1206 10:53:19.706642  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:19.706647  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:19.706706  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:19.734381  405191 cri.go:89] found id: ""
	I1206 10:53:19.734395  405191 logs.go:282] 0 containers: []
	W1206 10:53:19.734406  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:19.734412  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:19.734476  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:19.761415  405191 cri.go:89] found id: ""
	I1206 10:53:19.761429  405191 logs.go:282] 0 containers: []
	W1206 10:53:19.761436  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:19.761441  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:19.761502  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:19.787176  405191 cri.go:89] found id: ""
	I1206 10:53:19.787190  405191 logs.go:282] 0 containers: []
	W1206 10:53:19.787203  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:19.787209  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:19.787270  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:19.813067  405191 cri.go:89] found id: ""
	I1206 10:53:19.813081  405191 logs.go:282] 0 containers: []
	W1206 10:53:19.813088  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:19.813096  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:19.813105  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:19.878821  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:19.878840  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:19.894664  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:19.894680  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:19.965061  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:19.956101   12690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:19.957218   12690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:19.958973   12690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:19.959413   12690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:19.960938   12690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:19.956101   12690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:19.957218   12690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:19.958973   12690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:19.959413   12690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:19.960938   12690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:53:19.965101  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:19.965111  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:20.038434  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:20.038456  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:22.572942  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:22.583202  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:22.583273  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:22.608534  405191 cri.go:89] found id: ""
	I1206 10:53:22.608548  405191 logs.go:282] 0 containers: []
	W1206 10:53:22.608556  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:22.608561  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:22.608623  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:22.637655  405191 cri.go:89] found id: ""
	I1206 10:53:22.637673  405191 logs.go:282] 0 containers: []
	W1206 10:53:22.637680  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:22.637685  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:22.637748  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:22.666908  405191 cri.go:89] found id: ""
	I1206 10:53:22.666922  405191 logs.go:282] 0 containers: []
	W1206 10:53:22.666929  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:22.666935  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:22.666995  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:22.694611  405191 cri.go:89] found id: ""
	I1206 10:53:22.694625  405191 logs.go:282] 0 containers: []
	W1206 10:53:22.694633  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:22.694638  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:22.694705  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:22.720468  405191 cri.go:89] found id: ""
	I1206 10:53:22.720482  405191 logs.go:282] 0 containers: []
	W1206 10:53:22.720489  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:22.720494  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:22.720551  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:22.750061  405191 cri.go:89] found id: ""
	I1206 10:53:22.750075  405191 logs.go:282] 0 containers: []
	W1206 10:53:22.750082  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:22.750087  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:22.750148  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:22.778201  405191 cri.go:89] found id: ""
	I1206 10:53:22.778216  405191 logs.go:282] 0 containers: []
	W1206 10:53:22.778223  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:22.778230  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:22.778241  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:22.848689  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:22.848710  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:22.878893  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:22.878908  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:22.945043  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:22.945065  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:22.960966  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:22.960982  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:23.041735  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:23.033031   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:23.033838   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:23.035561   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:23.036147   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:23.037681   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:23.033031   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:23.033838   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:23.035561   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:23.036147   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:23.037681   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
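Every describe-nodes attempt fails the same way: kubectl gets "connect: connection refused" dialing localhost:8441, which means nothing is listening on the apiserver port at all, as opposed to a timeout (listener wedged) or a TLS/authorization error (listener up but rejecting the client). Below is a hypothetical probe that isolates just that distinction; the address localhost:8441 is taken from the errors above.

// probe_8441.go - hypothetical check mirroring kubectl's failure above:
// a plain TCP dial to the apiserver port distinguishes "connection refused"
// (no listener) from timeouts or TLS/auth errors further up the stack.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("dial failed:", err) // "connection refused" => apiserver not listening
		return
	}
	conn.Close()
	fmt.Println("port 8441 is accepting connections")
}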
	I1206 10:53:25.543429  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:25.553845  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:25.553906  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:25.580411  405191 cri.go:89] found id: ""
	I1206 10:53:25.580427  405191 logs.go:282] 0 containers: []
	W1206 10:53:25.580434  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:25.580439  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:25.580498  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:25.610347  405191 cri.go:89] found id: ""
	I1206 10:53:25.610361  405191 logs.go:282] 0 containers: []
	W1206 10:53:25.610368  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:25.610373  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:25.610430  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:25.637376  405191 cri.go:89] found id: ""
	I1206 10:53:25.637390  405191 logs.go:282] 0 containers: []
	W1206 10:53:25.637398  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:25.637403  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:25.637463  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:25.666544  405191 cri.go:89] found id: ""
	I1206 10:53:25.666558  405191 logs.go:282] 0 containers: []
	W1206 10:53:25.666572  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:25.666577  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:25.666636  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:25.692777  405191 cri.go:89] found id: ""
	I1206 10:53:25.692791  405191 logs.go:282] 0 containers: []
	W1206 10:53:25.692798  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:25.692803  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:25.692865  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:25.721819  405191 cri.go:89] found id: ""
	I1206 10:53:25.721833  405191 logs.go:282] 0 containers: []
	W1206 10:53:25.721841  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:25.721845  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:25.721901  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:25.749420  405191 cri.go:89] found id: ""
	I1206 10:53:25.749435  405191 logs.go:282] 0 containers: []
	W1206 10:53:25.749442  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:25.749450  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:25.749461  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:25.817956  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:25.817979  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:25.847454  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:25.847480  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:25.913445  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:25.913464  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:25.928310  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:25.928326  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:26.010257  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:25.998143   12912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:25.999802   12912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:26.001851   12912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:26.002260   12912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:26.005429   12912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:25.998143   12912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:25.999802   12912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:26.001851   12912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:26.002260   12912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:26.005429   12912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:53:28.510540  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:28.521536  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:28.521597  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:28.549848  405191 cri.go:89] found id: ""
	I1206 10:53:28.549862  405191 logs.go:282] 0 containers: []
	W1206 10:53:28.549869  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:28.549880  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:28.549941  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:28.574916  405191 cri.go:89] found id: ""
	I1206 10:53:28.574929  405191 logs.go:282] 0 containers: []
	W1206 10:53:28.574937  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:28.574941  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:28.575001  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:28.603948  405191 cri.go:89] found id: ""
	I1206 10:53:28.603963  405191 logs.go:282] 0 containers: []
	W1206 10:53:28.603971  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:28.603976  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:28.604038  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:28.633100  405191 cri.go:89] found id: ""
	I1206 10:53:28.633114  405191 logs.go:282] 0 containers: []
	W1206 10:53:28.633121  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:28.633127  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:28.633186  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:28.658360  405191 cri.go:89] found id: ""
	I1206 10:53:28.658374  405191 logs.go:282] 0 containers: []
	W1206 10:53:28.658381  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:28.658386  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:28.658450  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:28.686919  405191 cri.go:89] found id: ""
	I1206 10:53:28.686933  405191 logs.go:282] 0 containers: []
	W1206 10:53:28.686949  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:28.686955  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:28.687012  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:28.713970  405191 cri.go:89] found id: ""
	I1206 10:53:28.713984  405191 logs.go:282] 0 containers: []
	W1206 10:53:28.713991  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:28.714001  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:28.714011  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:28.783354  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:28.783415  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:28.799765  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:28.799785  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:28.875190  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:28.865163   13003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:28.865941   13003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:28.867993   13003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:28.868552   13003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:28.870594   13003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:28.865163   13003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:28.865941   13003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:28.867993   13003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:28.868552   13003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:28.870594   13003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
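The five memcache.go errors per attempt are kubectl's discovery client retrying the same GET on /api before giving up with the "connection refused" summary line. A hypothetical way to issue that one request directly, skipping certificate verification because only reachability matters here; the URL, including the timeout parameter, is copied from the errors above.

// discovery_probe.go - hypothetical: issue the request kubectl's discovery
// client retries above (GET /api on the apiserver) and report the outcome.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Reachability check only; do not verify the apiserver certificate.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://localhost:8441/api?timeout=32s")
	if err != nil {
		fmt.Println("request failed:", err) // connection refused while the apiserver is down
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver answered with status:", resp.Status)
}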
	I1206 10:53:28.875200  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:28.875211  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:28.947238  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:28.947258  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:31.487136  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:31.497608  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:31.497670  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:31.524319  405191 cri.go:89] found id: ""
	I1206 10:53:31.524333  405191 logs.go:282] 0 containers: []
	W1206 10:53:31.524341  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:31.524347  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:31.524409  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:31.550830  405191 cri.go:89] found id: ""
	I1206 10:53:31.550845  405191 logs.go:282] 0 containers: []
	W1206 10:53:31.550852  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:31.550857  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:31.550925  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:31.577502  405191 cri.go:89] found id: ""
	I1206 10:53:31.577516  405191 logs.go:282] 0 containers: []
	W1206 10:53:31.577523  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:31.577528  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:31.577587  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:31.604074  405191 cri.go:89] found id: ""
	I1206 10:53:31.604088  405191 logs.go:282] 0 containers: []
	W1206 10:53:31.604095  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:31.604100  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:31.604157  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:31.630962  405191 cri.go:89] found id: ""
	I1206 10:53:31.630976  405191 logs.go:282] 0 containers: []
	W1206 10:53:31.630984  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:31.630989  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:31.631053  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:31.656604  405191 cri.go:89] found id: ""
	I1206 10:53:31.656619  405191 logs.go:282] 0 containers: []
	W1206 10:53:31.656626  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:31.656632  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:31.656695  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:31.682731  405191 cri.go:89] found id: ""
	I1206 10:53:31.682745  405191 logs.go:282] 0 containers: []
	W1206 10:53:31.682752  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:31.682760  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:31.682771  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:31.715043  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:31.715059  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:31.780742  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:31.780762  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:31.795393  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:31.795410  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:31.863799  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:31.855344   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:31.856015   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:31.857739   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:31.858189   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:31.859847   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:31.855344   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:31.856015   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:31.857739   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:31.858189   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:31.859847   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:53:31.863809  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:31.863820  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:34.432706  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:34.442775  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:34.442837  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:34.468444  405191 cri.go:89] found id: ""
	I1206 10:53:34.468458  405191 logs.go:282] 0 containers: []
	W1206 10:53:34.468465  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:34.468471  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:34.468536  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:34.494328  405191 cri.go:89] found id: ""
	I1206 10:53:34.494343  405191 logs.go:282] 0 containers: []
	W1206 10:53:34.494350  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:34.494356  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:34.494418  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:34.527045  405191 cri.go:89] found id: ""
	I1206 10:53:34.527060  405191 logs.go:282] 0 containers: []
	W1206 10:53:34.527068  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:34.527076  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:34.527139  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:34.554315  405191 cri.go:89] found id: ""
	I1206 10:53:34.554328  405191 logs.go:282] 0 containers: []
	W1206 10:53:34.554335  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:34.554340  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:34.554408  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:34.579994  405191 cri.go:89] found id: ""
	I1206 10:53:34.580009  405191 logs.go:282] 0 containers: []
	W1206 10:53:34.580024  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:34.580030  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:34.580093  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:34.608896  405191 cri.go:89] found id: ""
	I1206 10:53:34.608910  405191 logs.go:282] 0 containers: []
	W1206 10:53:34.608917  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:34.608925  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:34.608983  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:34.638506  405191 cri.go:89] found id: ""
	I1206 10:53:34.638521  405191 logs.go:282] 0 containers: []
	W1206 10:53:34.638528  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:34.638536  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:34.638549  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:34.700281  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:34.691941   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:34.692724   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:34.693733   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:34.694312   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:34.696014   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:34.691941   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:34.692724   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:34.693733   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:34.694312   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:34.696014   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:53:34.700290  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:34.700302  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:34.773019  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:34.773040  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:34.803610  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:34.803628  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:34.870473  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:34.870498  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:37.386935  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:37.397529  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:37.397618  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:37.422529  405191 cri.go:89] found id: ""
	I1206 10:53:37.422543  405191 logs.go:282] 0 containers: []
	W1206 10:53:37.422550  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:37.422556  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:37.422613  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:37.447810  405191 cri.go:89] found id: ""
	I1206 10:53:37.447824  405191 logs.go:282] 0 containers: []
	W1206 10:53:37.447830  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:37.447836  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:37.447895  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:37.473775  405191 cri.go:89] found id: ""
	I1206 10:53:37.473794  405191 logs.go:282] 0 containers: []
	W1206 10:53:37.473801  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:37.473806  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:37.473862  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:37.499349  405191 cri.go:89] found id: ""
	I1206 10:53:37.499362  405191 logs.go:282] 0 containers: []
	W1206 10:53:37.499370  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:37.499400  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:37.499468  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:37.526194  405191 cri.go:89] found id: ""
	I1206 10:53:37.526208  405191 logs.go:282] 0 containers: []
	W1206 10:53:37.526216  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:37.526221  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:37.526286  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:37.552021  405191 cri.go:89] found id: ""
	I1206 10:53:37.552041  405191 logs.go:282] 0 containers: []
	W1206 10:53:37.552049  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:37.552054  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:37.552113  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:37.577455  405191 cri.go:89] found id: ""
	I1206 10:53:37.577469  405191 logs.go:282] 0 containers: []
	W1206 10:53:37.577476  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:37.577484  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:37.577495  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:37.605307  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:37.605324  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:37.674813  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:37.674836  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:37.689252  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:37.689268  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:37.751707  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:37.743090   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:37.743746   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:37.745542   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:37.746167   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:37.747937   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:53:37.751719  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:37.751730  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:40.320654  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:40.331310  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:40.331372  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:40.357691  405191 cri.go:89] found id: ""
	I1206 10:53:40.357706  405191 logs.go:282] 0 containers: []
	W1206 10:53:40.357721  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:40.357726  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:40.357789  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:40.383818  405191 cri.go:89] found id: ""
	I1206 10:53:40.383833  405191 logs.go:282] 0 containers: []
	W1206 10:53:40.383841  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:40.383847  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:40.383904  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:40.412121  405191 cri.go:89] found id: ""
	I1206 10:53:40.412134  405191 logs.go:282] 0 containers: []
	W1206 10:53:40.412141  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:40.412146  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:40.412204  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:40.438527  405191 cri.go:89] found id: ""
	I1206 10:53:40.438542  405191 logs.go:282] 0 containers: []
	W1206 10:53:40.438549  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:40.438554  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:40.438616  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:40.465329  405191 cri.go:89] found id: ""
	I1206 10:53:40.465344  405191 logs.go:282] 0 containers: []
	W1206 10:53:40.465351  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:40.465356  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:40.465420  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:40.491939  405191 cri.go:89] found id: ""
	I1206 10:53:40.491952  405191 logs.go:282] 0 containers: []
	W1206 10:53:40.491960  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:40.491965  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:40.492029  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:40.516801  405191 cri.go:89] found id: ""
	I1206 10:53:40.516821  405191 logs.go:282] 0 containers: []
	W1206 10:53:40.516828  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:40.516836  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:40.516848  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:40.593042  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:40.593062  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:40.608966  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:40.608986  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:40.675818  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:40.665869   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:40.667834   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:40.668210   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:40.669803   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:40.670394   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:53:40.675828  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:40.675841  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:40.744680  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:40.744702  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:43.275550  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:43.285722  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:43.285783  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:43.312235  405191 cri.go:89] found id: ""
	I1206 10:53:43.312249  405191 logs.go:282] 0 containers: []
	W1206 10:53:43.312262  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:43.312278  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:43.312337  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:43.338204  405191 cri.go:89] found id: ""
	I1206 10:53:43.338219  405191 logs.go:282] 0 containers: []
	W1206 10:53:43.338226  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:43.338249  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:43.338321  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:43.363434  405191 cri.go:89] found id: ""
	I1206 10:53:43.363455  405191 logs.go:282] 0 containers: []
	W1206 10:53:43.363463  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:43.363480  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:43.363562  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:43.390724  405191 cri.go:89] found id: ""
	I1206 10:53:43.390738  405191 logs.go:282] 0 containers: []
	W1206 10:53:43.390745  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:43.390750  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:43.390824  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:43.416427  405191 cri.go:89] found id: ""
	I1206 10:53:43.416442  405191 logs.go:282] 0 containers: []
	W1206 10:53:43.416449  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:43.416454  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:43.416511  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:43.446598  405191 cri.go:89] found id: ""
	I1206 10:53:43.446612  405191 logs.go:282] 0 containers: []
	W1206 10:53:43.446619  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:43.446625  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:43.446695  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:43.472759  405191 cri.go:89] found id: ""
	I1206 10:53:43.472773  405191 logs.go:282] 0 containers: []
	W1206 10:53:43.472779  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:43.472787  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:43.472797  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:43.538686  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:43.538706  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:43.553731  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:43.553746  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:43.618535  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:43.609715   13531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:43.610485   13531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:43.612195   13531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:43.612812   13531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:43.614536   13531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:53:43.618556  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:43.618570  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:43.690132  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:43.690152  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:46.225047  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:46.236105  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:46.236179  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:46.269036  405191 cri.go:89] found id: ""
	I1206 10:53:46.269066  405191 logs.go:282] 0 containers: []
	W1206 10:53:46.269074  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:46.269079  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:46.269151  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:46.300616  405191 cri.go:89] found id: ""
	I1206 10:53:46.300631  405191 logs.go:282] 0 containers: []
	W1206 10:53:46.300639  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:46.300645  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:46.300707  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:46.330077  405191 cri.go:89] found id: ""
	I1206 10:53:46.330102  405191 logs.go:282] 0 containers: []
	W1206 10:53:46.330110  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:46.330115  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:46.330189  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:46.361893  405191 cri.go:89] found id: ""
	I1206 10:53:46.361908  405191 logs.go:282] 0 containers: []
	W1206 10:53:46.361915  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:46.361920  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:46.361991  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:46.387920  405191 cri.go:89] found id: ""
	I1206 10:53:46.387934  405191 logs.go:282] 0 containers: []
	W1206 10:53:46.387941  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:46.387947  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:46.388006  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:46.415440  405191 cri.go:89] found id: ""
	I1206 10:53:46.415463  405191 logs.go:282] 0 containers: []
	W1206 10:53:46.415470  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:46.415475  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:46.415534  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:46.442198  405191 cri.go:89] found id: ""
	I1206 10:53:46.442211  405191 logs.go:282] 0 containers: []
	W1206 10:53:46.442219  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:46.442226  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:46.442239  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:46.457274  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:46.457290  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:46.520346  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:46.512290   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:46.512824   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:46.514476   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:46.514946   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:46.516438   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:53:46.520388  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:46.520399  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:46.595642  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:46.595673  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:46.626749  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:46.626769  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:49.193445  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:49.203743  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:49.203807  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:49.233557  405191 cri.go:89] found id: ""
	I1206 10:53:49.233571  405191 logs.go:282] 0 containers: []
	W1206 10:53:49.233578  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:49.233583  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:49.233643  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:49.265569  405191 cri.go:89] found id: ""
	I1206 10:53:49.265583  405191 logs.go:282] 0 containers: []
	W1206 10:53:49.265590  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:49.265595  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:49.265651  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:49.296146  405191 cri.go:89] found id: ""
	I1206 10:53:49.296159  405191 logs.go:282] 0 containers: []
	W1206 10:53:49.296166  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:49.296172  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:49.296232  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:49.321471  405191 cri.go:89] found id: ""
	I1206 10:53:49.321485  405191 logs.go:282] 0 containers: []
	W1206 10:53:49.321492  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:49.321498  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:49.321556  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:49.346537  405191 cri.go:89] found id: ""
	I1206 10:53:49.346551  405191 logs.go:282] 0 containers: []
	W1206 10:53:49.346571  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:49.346577  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:49.346693  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:49.372292  405191 cri.go:89] found id: ""
	I1206 10:53:49.372307  405191 logs.go:282] 0 containers: []
	W1206 10:53:49.372314  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:49.372320  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:49.372382  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:49.397395  405191 cri.go:89] found id: ""
	I1206 10:53:49.397408  405191 logs.go:282] 0 containers: []
	W1206 10:53:49.397415  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:49.397422  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:49.397432  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:49.464359  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:49.464378  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:49.479746  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:49.479762  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:49.542949  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:49.534167   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:49.534752   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:49.536598   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:49.537091   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:49.538580   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:53:49.542959  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:49.542969  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:49.612749  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:49.612769  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:52.142276  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:52.152804  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:52.152867  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:52.179560  405191 cri.go:89] found id: ""
	I1206 10:53:52.179575  405191 logs.go:282] 0 containers: []
	W1206 10:53:52.179582  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:52.179587  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:52.179642  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:52.204827  405191 cri.go:89] found id: ""
	I1206 10:53:52.204842  405191 logs.go:282] 0 containers: []
	W1206 10:53:52.204849  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:52.204854  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:52.204917  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:52.250790  405191 cri.go:89] found id: ""
	I1206 10:53:52.250804  405191 logs.go:282] 0 containers: []
	W1206 10:53:52.250811  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:52.250816  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:52.250886  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:52.282140  405191 cri.go:89] found id: ""
	I1206 10:53:52.282153  405191 logs.go:282] 0 containers: []
	W1206 10:53:52.282161  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:52.282166  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:52.282225  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:52.314373  405191 cri.go:89] found id: ""
	I1206 10:53:52.314387  405191 logs.go:282] 0 containers: []
	W1206 10:53:52.314395  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:52.314400  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:52.314471  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:52.339037  405191 cri.go:89] found id: ""
	I1206 10:53:52.339051  405191 logs.go:282] 0 containers: []
	W1206 10:53:52.339058  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:52.339064  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:52.339124  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:52.366113  405191 cri.go:89] found id: ""
	I1206 10:53:52.366127  405191 logs.go:282] 0 containers: []
	W1206 10:53:52.366134  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:52.366142  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:52.366152  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:52.436368  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:52.436388  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:52.451468  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:52.451487  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:52.518739  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:52.509542   13849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:52.509966   13849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:52.511603   13849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:52.511955   13849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:52.513754   13849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:53:52.518760  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:52.518777  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:52.593784  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:52.593805  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:55.124735  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:55.135510  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:55.135574  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:55.162613  405191 cri.go:89] found id: ""
	I1206 10:53:55.162626  405191 logs.go:282] 0 containers: []
	W1206 10:53:55.162633  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:55.162638  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:55.162703  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:55.189655  405191 cri.go:89] found id: ""
	I1206 10:53:55.189669  405191 logs.go:282] 0 containers: []
	W1206 10:53:55.189676  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:55.189682  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:55.189786  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:55.215289  405191 cri.go:89] found id: ""
	I1206 10:53:55.215303  405191 logs.go:282] 0 containers: []
	W1206 10:53:55.215310  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:55.215315  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:55.215402  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:55.247890  405191 cri.go:89] found id: ""
	I1206 10:53:55.247913  405191 logs.go:282] 0 containers: []
	W1206 10:53:55.247921  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:55.247926  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:55.247992  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:55.283368  405191 cri.go:89] found id: ""
	I1206 10:53:55.283409  405191 logs.go:282] 0 containers: []
	W1206 10:53:55.283416  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:55.283422  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:55.283516  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:55.310596  405191 cri.go:89] found id: ""
	I1206 10:53:55.310609  405191 logs.go:282] 0 containers: []
	W1206 10:53:55.310627  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:55.310632  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:55.310712  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:55.337361  405191 cri.go:89] found id: ""
	I1206 10:53:55.337374  405191 logs.go:282] 0 containers: []
	W1206 10:53:55.337381  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:55.337389  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:55.337399  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:55.404341  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:55.404361  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:55.419687  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:55.419705  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:55.485498  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:55.476614   13958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:55.477821   13958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:55.478840   13958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:55.479810   13958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:55.480435   13958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:53:55.485509  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:55.485522  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:55.555911  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:55.555932  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:58.088179  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:58.099010  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:58.099069  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:58.124686  405191 cri.go:89] found id: ""
	I1206 10:53:58.124700  405191 logs.go:282] 0 containers: []
	W1206 10:53:58.124710  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:58.124716  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:58.124773  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:58.149717  405191 cri.go:89] found id: ""
	I1206 10:53:58.149730  405191 logs.go:282] 0 containers: []
	W1206 10:53:58.149738  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:58.149743  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:58.149800  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:58.177293  405191 cri.go:89] found id: ""
	I1206 10:53:58.177307  405191 logs.go:282] 0 containers: []
	W1206 10:53:58.177314  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:58.177319  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:58.177389  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:58.203540  405191 cri.go:89] found id: ""
	I1206 10:53:58.203554  405191 logs.go:282] 0 containers: []
	W1206 10:53:58.203562  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:58.203567  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:58.203632  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:58.237354  405191 cri.go:89] found id: ""
	I1206 10:53:58.237377  405191 logs.go:282] 0 containers: []
	W1206 10:53:58.237385  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:58.237390  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:58.237459  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:58.269725  405191 cri.go:89] found id: ""
	I1206 10:53:58.269739  405191 logs.go:282] 0 containers: []
	W1206 10:53:58.269746  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:58.269751  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:58.269821  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:58.297406  405191 cri.go:89] found id: ""
	I1206 10:53:58.297420  405191 logs.go:282] 0 containers: []
	W1206 10:53:58.297427  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:58.297435  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:58.297445  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:58.363296  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:58.363319  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:58.379154  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:58.379170  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:58.448306  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:58.438857   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:58.439654   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:58.441442   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:58.441790   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:58.443511   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:53:58.448317  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:58.448331  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:58.518384  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:58.518408  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:01.052183  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:01.062404  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:01.062462  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:01.087509  405191 cri.go:89] found id: ""
	I1206 10:54:01.087523  405191 logs.go:282] 0 containers: []
	W1206 10:54:01.087530  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:01.087536  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:01.087598  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:01.113371  405191 cri.go:89] found id: ""
	I1206 10:54:01.113385  405191 logs.go:282] 0 containers: []
	W1206 10:54:01.113392  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:01.113397  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:01.113456  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:01.140194  405191 cri.go:89] found id: ""
	I1206 10:54:01.140208  405191 logs.go:282] 0 containers: []
	W1206 10:54:01.140214  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:01.140220  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:01.140282  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:01.166431  405191 cri.go:89] found id: ""
	I1206 10:54:01.166445  405191 logs.go:282] 0 containers: []
	W1206 10:54:01.166452  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:01.166460  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:01.166523  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:01.195742  405191 cri.go:89] found id: ""
	I1206 10:54:01.195756  405191 logs.go:282] 0 containers: []
	W1206 10:54:01.195764  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:01.195769  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:01.195835  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:01.228731  405191 cri.go:89] found id: ""
	I1206 10:54:01.228746  405191 logs.go:282] 0 containers: []
	W1206 10:54:01.228753  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:01.228759  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:01.228821  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:01.260175  405191 cri.go:89] found id: ""
	I1206 10:54:01.260189  405191 logs.go:282] 0 containers: []
	W1206 10:54:01.260196  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:01.260204  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:01.260214  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:01.337819  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:01.337839  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:01.353486  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:01.353502  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:01.423278  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:01.414904   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:01.415292   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:01.417033   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:01.417517   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:01.418780   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:54:01.423288  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:01.423299  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:01.492536  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:01.492556  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
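	The block above is one full pass of minikube's diagnostic loop: poll for a kube-apiserver process with pgrep, ask the CRI for each control-plane container by name, and, when none are found, gather kubelet, dmesg, describe-nodes, CRI-O, and container-status logs before retrying roughly every three seconds. A minimal shell sketch of the same probe, assuming crictl and journalctl are available on the node exactly as the logged commands do, would be:

	    # Hypothetical reproduction of the probe shown in this log; every
	    # command below appears verbatim in the gathered output above.
	    for name in kube-apiserver etcd coredns kube-scheduler \
	                kube-proxy kube-controller-manager kindnet; do
	      # --quiet prints only container IDs; empty output is the
	      # "0 containers" case logged as 'found id: ""'.
	      sudo crictl ps -a --quiet --name="${name}"
	    done
	    sudo journalctl -u kubelet -n 400   # kubelet logs
	    sudo journalctl -u crio -n 400      # CRI-O logs
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

	The passes that follow repeat this sequence unchanged; only the wall-clock timestamps and the helper process IDs differ between iterations.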
	I1206 10:54:04.028526  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:04.039535  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:04.039600  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:04.069150  405191 cri.go:89] found id: ""
	I1206 10:54:04.069164  405191 logs.go:282] 0 containers: []
	W1206 10:54:04.069172  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:04.069177  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:04.069238  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:04.100343  405191 cri.go:89] found id: ""
	I1206 10:54:04.100357  405191 logs.go:282] 0 containers: []
	W1206 10:54:04.100364  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:04.100369  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:04.100431  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:04.127347  405191 cri.go:89] found id: ""
	I1206 10:54:04.127361  405191 logs.go:282] 0 containers: []
	W1206 10:54:04.127368  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:04.127395  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:04.127466  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:04.154542  405191 cri.go:89] found id: ""
	I1206 10:54:04.154557  405191 logs.go:282] 0 containers: []
	W1206 10:54:04.154564  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:04.154569  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:04.154628  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:04.181647  405191 cri.go:89] found id: ""
	I1206 10:54:04.181661  405191 logs.go:282] 0 containers: []
	W1206 10:54:04.181668  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:04.181676  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:04.181739  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:04.210872  405191 cri.go:89] found id: ""
	I1206 10:54:04.210886  405191 logs.go:282] 0 containers: []
	W1206 10:54:04.210893  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:04.210899  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:04.210962  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:04.246454  405191 cri.go:89] found id: ""
	I1206 10:54:04.246468  405191 logs.go:282] 0 containers: []
	W1206 10:54:04.246482  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:04.246490  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:04.246501  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:04.322848  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:04.322872  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:04.338928  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:04.338945  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:04.409905  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:04.400164   14270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:04.400961   14270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:04.402781   14270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:04.403461   14270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:04.404662   14270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:54:04.409916  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:04.409928  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:04.480369  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:04.480389  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:07.012345  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:07.022891  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:07.022962  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:07.049835  405191 cri.go:89] found id: ""
	I1206 10:54:07.049849  405191 logs.go:282] 0 containers: []
	W1206 10:54:07.049856  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:07.049861  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:07.049925  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:07.076617  405191 cri.go:89] found id: ""
	I1206 10:54:07.076631  405191 logs.go:282] 0 containers: []
	W1206 10:54:07.076637  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:07.076643  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:07.076704  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:07.103202  405191 cri.go:89] found id: ""
	I1206 10:54:07.103216  405191 logs.go:282] 0 containers: []
	W1206 10:54:07.103223  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:07.103229  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:07.103288  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:07.129964  405191 cri.go:89] found id: ""
	I1206 10:54:07.129977  405191 logs.go:282] 0 containers: []
	W1206 10:54:07.129984  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:07.129989  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:07.130048  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:07.157459  405191 cri.go:89] found id: ""
	I1206 10:54:07.157473  405191 logs.go:282] 0 containers: []
	W1206 10:54:07.157480  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:07.157485  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:07.157551  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:07.183797  405191 cri.go:89] found id: ""
	I1206 10:54:07.183811  405191 logs.go:282] 0 containers: []
	W1206 10:54:07.183818  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:07.183823  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:07.183881  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:07.209675  405191 cri.go:89] found id: ""
	I1206 10:54:07.209689  405191 logs.go:282] 0 containers: []
	W1206 10:54:07.209697  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:07.209704  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:07.209715  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:07.228202  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:07.228225  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:07.312770  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:07.304201   14373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:07.304672   14373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:07.306492   14373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:07.307083   14373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:07.308768   14373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:54:07.312782  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:07.312792  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:07.383254  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:07.383275  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:07.414045  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:07.414060  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:09.985551  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:09.995745  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:09.995806  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:10.030868  405191 cri.go:89] found id: ""
	I1206 10:54:10.030884  405191 logs.go:282] 0 containers: []
	W1206 10:54:10.030892  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:10.030898  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:10.030967  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:10.060505  405191 cri.go:89] found id: ""
	I1206 10:54:10.060520  405191 logs.go:282] 0 containers: []
	W1206 10:54:10.060527  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:10.060532  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:10.060596  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:10.087945  405191 cri.go:89] found id: ""
	I1206 10:54:10.087979  405191 logs.go:282] 0 containers: []
	W1206 10:54:10.087986  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:10.087992  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:10.088069  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:10.116434  405191 cri.go:89] found id: ""
	I1206 10:54:10.116448  405191 logs.go:282] 0 containers: []
	W1206 10:54:10.116455  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:10.116461  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:10.116523  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:10.144560  405191 cri.go:89] found id: ""
	I1206 10:54:10.144572  405191 logs.go:282] 0 containers: []
	W1206 10:54:10.144579  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:10.144584  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:10.144645  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:10.173019  405191 cri.go:89] found id: ""
	I1206 10:54:10.173033  405191 logs.go:282] 0 containers: []
	W1206 10:54:10.173040  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:10.173046  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:10.173105  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:10.200809  405191 cri.go:89] found id: ""
	I1206 10:54:10.200823  405191 logs.go:282] 0 containers: []
	W1206 10:54:10.200830  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:10.200837  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:10.200847  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:10.215623  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:10.215642  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:10.300302  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:10.291573   14479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:10.292121   14479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:10.293856   14479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:10.294451   14479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:10.295989   14479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:54:10.300314  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:10.300325  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:10.369603  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:10.369624  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:10.402671  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:10.402687  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:12.968162  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:12.978411  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:12.978473  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:13.010579  405191 cri.go:89] found id: ""
	I1206 10:54:13.010593  405191 logs.go:282] 0 containers: []
	W1206 10:54:13.010601  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:13.010606  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:13.010669  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:13.037103  405191 cri.go:89] found id: ""
	I1206 10:54:13.037118  405191 logs.go:282] 0 containers: []
	W1206 10:54:13.037125  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:13.037131  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:13.037199  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:13.063109  405191 cri.go:89] found id: ""
	I1206 10:54:13.063124  405191 logs.go:282] 0 containers: []
	W1206 10:54:13.063131  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:13.063136  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:13.063195  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:13.088780  405191 cri.go:89] found id: ""
	I1206 10:54:13.088794  405191 logs.go:282] 0 containers: []
	W1206 10:54:13.088801  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:13.088806  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:13.088868  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:13.114682  405191 cri.go:89] found id: ""
	I1206 10:54:13.114696  405191 logs.go:282] 0 containers: []
	W1206 10:54:13.114703  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:13.114708  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:13.114952  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:13.141850  405191 cri.go:89] found id: ""
	I1206 10:54:13.141866  405191 logs.go:282] 0 containers: []
	W1206 10:54:13.141873  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:13.141880  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:13.141945  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:13.167958  405191 cri.go:89] found id: ""
	I1206 10:54:13.167975  405191 logs.go:282] 0 containers: []
	W1206 10:54:13.167982  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:13.167990  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:13.168002  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:13.237314  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:13.237335  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:13.254137  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:13.254164  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:13.322226  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:13.313022   14585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:13.313640   14585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:13.315145   14585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:13.315780   14585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:13.317512   14585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:54:13.322237  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:13.322248  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:13.394938  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:13.394958  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:15.923162  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:15.933287  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:15.933346  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:15.958679  405191 cri.go:89] found id: ""
	I1206 10:54:15.958694  405191 logs.go:282] 0 containers: []
	W1206 10:54:15.958701  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:15.958706  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:15.958768  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:15.986252  405191 cri.go:89] found id: ""
	I1206 10:54:15.986267  405191 logs.go:282] 0 containers: []
	W1206 10:54:15.986274  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:15.986279  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:15.986339  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:16.015947  405191 cri.go:89] found id: ""
	I1206 10:54:16.015961  405191 logs.go:282] 0 containers: []
	W1206 10:54:16.015968  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:16.015973  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:16.016038  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:16.046583  405191 cri.go:89] found id: ""
	I1206 10:54:16.046597  405191 logs.go:282] 0 containers: []
	W1206 10:54:16.046604  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:16.046609  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:16.046673  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:16.073401  405191 cri.go:89] found id: ""
	I1206 10:54:16.073415  405191 logs.go:282] 0 containers: []
	W1206 10:54:16.073422  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:16.073428  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:16.073489  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:16.099301  405191 cri.go:89] found id: ""
	I1206 10:54:16.099315  405191 logs.go:282] 0 containers: []
	W1206 10:54:16.099321  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:16.099327  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:16.099409  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:16.132045  405191 cri.go:89] found id: ""
	I1206 10:54:16.132060  405191 logs.go:282] 0 containers: []
	W1206 10:54:16.132067  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:16.132075  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:16.132086  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:16.201949  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:16.191660   14680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:16.193954   14680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:16.194695   14680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:16.196370   14680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:16.196866   14680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:54:16.201962  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:16.201972  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:16.277750  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:16.277769  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:16.311130  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:16.311148  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:16.377771  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:16.377793  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:18.893108  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:18.903283  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:18.903345  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:18.927861  405191 cri.go:89] found id: ""
	I1206 10:54:18.927875  405191 logs.go:282] 0 containers: []
	W1206 10:54:18.927882  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:18.927887  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:18.927945  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:18.953460  405191 cri.go:89] found id: ""
	I1206 10:54:18.953474  405191 logs.go:282] 0 containers: []
	W1206 10:54:18.953482  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:18.953486  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:18.953563  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:18.981063  405191 cri.go:89] found id: ""
	I1206 10:54:18.981077  405191 logs.go:282] 0 containers: []
	W1206 10:54:18.981088  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:18.981093  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:18.981154  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:19.011134  405191 cri.go:89] found id: ""
	I1206 10:54:19.011148  405191 logs.go:282] 0 containers: []
	W1206 10:54:19.011156  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:19.011161  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:19.011221  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:19.037866  405191 cri.go:89] found id: ""
	I1206 10:54:19.037889  405191 logs.go:282] 0 containers: []
	W1206 10:54:19.037895  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:19.037901  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:19.037972  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:19.067672  405191 cri.go:89] found id: ""
	I1206 10:54:19.067685  405191 logs.go:282] 0 containers: []
	W1206 10:54:19.067692  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:19.067697  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:19.067753  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:19.092891  405191 cri.go:89] found id: ""
	I1206 10:54:19.092906  405191 logs.go:282] 0 containers: []
	W1206 10:54:19.092913  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:19.092921  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:19.092933  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:19.158186  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:19.149512   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:19.150168   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:19.151831   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:19.152364   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:19.154131   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:54:19.158196  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:19.158209  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:19.231681  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:19.231701  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:19.267680  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:19.267704  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:19.341777  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:19.341796  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:21.856895  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:21.867600  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:21.867659  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:21.899562  405191 cri.go:89] found id: ""
	I1206 10:54:21.899576  405191 logs.go:282] 0 containers: []
	W1206 10:54:21.899583  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:21.899589  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:21.899647  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:21.924433  405191 cri.go:89] found id: ""
	I1206 10:54:21.924446  405191 logs.go:282] 0 containers: []
	W1206 10:54:21.924454  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:21.924459  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:21.924517  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:21.949461  405191 cri.go:89] found id: ""
	I1206 10:54:21.949476  405191 logs.go:282] 0 containers: []
	W1206 10:54:21.949482  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:21.949493  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:21.949550  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:21.976373  405191 cri.go:89] found id: ""
	I1206 10:54:21.976388  405191 logs.go:282] 0 containers: []
	W1206 10:54:21.976396  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:21.976401  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:21.976457  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:22.025051  405191 cri.go:89] found id: ""
	I1206 10:54:22.025074  405191 logs.go:282] 0 containers: []
	W1206 10:54:22.025095  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:22.025101  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:22.025214  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:22.054790  405191 cri.go:89] found id: ""
	I1206 10:54:22.054804  405191 logs.go:282] 0 containers: []
	W1206 10:54:22.054811  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:22.054817  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:22.054873  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:22.081220  405191 cri.go:89] found id: ""
	I1206 10:54:22.081235  405191 logs.go:282] 0 containers: []
	W1206 10:54:22.081242  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:22.081251  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:22.081262  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:22.147339  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:22.147359  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:22.162252  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:22.162268  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:22.233807  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:22.219327   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:22.220102   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:22.225452   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:22.227256   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:22.228864   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:54:22.233819  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:22.233838  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:22.312101  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:22.312123  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:24.852672  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:24.863210  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:24.863271  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:24.889673  405191 cri.go:89] found id: ""
	I1206 10:54:24.889687  405191 logs.go:282] 0 containers: []
	W1206 10:54:24.889695  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:24.889700  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:24.889758  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:24.920816  405191 cri.go:89] found id: ""
	I1206 10:54:24.920830  405191 logs.go:282] 0 containers: []
	W1206 10:54:24.920837  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:24.920842  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:24.920900  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:24.945958  405191 cri.go:89] found id: ""
	I1206 10:54:24.945972  405191 logs.go:282] 0 containers: []
	W1206 10:54:24.945980  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:24.945985  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:24.946046  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:24.970886  405191 cri.go:89] found id: ""
	I1206 10:54:24.970900  405191 logs.go:282] 0 containers: []
	W1206 10:54:24.970907  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:24.970912  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:24.970970  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:25.000298  405191 cri.go:89] found id: ""
	I1206 10:54:25.000315  405191 logs.go:282] 0 containers: []
	W1206 10:54:25.000323  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:25.000329  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:25.000399  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:25.033867  405191 cri.go:89] found id: ""
	I1206 10:54:25.033882  405191 logs.go:282] 0 containers: []
	W1206 10:54:25.033890  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:25.033895  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:25.033960  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:25.060149  405191 cri.go:89] found id: ""
	I1206 10:54:25.060162  405191 logs.go:282] 0 containers: []
	W1206 10:54:25.060169  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:25.060177  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:25.060188  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:25.128734  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:25.120144   14994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:25.120771   14994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:25.122547   14994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:25.123145   14994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:25.124861   14994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:54:25.128746  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:25.128757  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:25.198421  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:25.198443  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:25.239321  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:25.239341  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:25.316857  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:25.316878  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
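
The cycle above repeats the same seven container queries; every 'found id: ""' line means crictl sees no container with that name in any state. A minimal way to reproduce the probe by hand from a shell on the node (reachable via "minikube ssh"; the profile name is not shown in this excerpt):

    # Same queries the driver issues above; empty output corresponds to found id: ""
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
      echo "== ${c} =="
      sudo crictl ps -a --quiet --name="${c}"
    done
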
	I1206 10:54:27.833465  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:27.844470  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:27.844528  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:27.870606  405191 cri.go:89] found id: ""
	I1206 10:54:27.870621  405191 logs.go:282] 0 containers: []
	W1206 10:54:27.870628  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:27.870633  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:27.870693  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:27.894893  405191 cri.go:89] found id: ""
	I1206 10:54:27.894906  405191 logs.go:282] 0 containers: []
	W1206 10:54:27.894913  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:27.894918  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:27.894973  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:27.920116  405191 cri.go:89] found id: ""
	I1206 10:54:27.920129  405191 logs.go:282] 0 containers: []
	W1206 10:54:27.920136  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:27.920142  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:27.920201  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:27.946774  405191 cri.go:89] found id: ""
	I1206 10:54:27.946788  405191 logs.go:282] 0 containers: []
	W1206 10:54:27.946798  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:27.946806  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:27.946869  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:27.973164  405191 cri.go:89] found id: ""
	I1206 10:54:27.973178  405191 logs.go:282] 0 containers: []
	W1206 10:54:27.973185  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:27.973190  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:27.973247  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:28.005225  405191 cri.go:89] found id: ""
	I1206 10:54:28.005240  405191 logs.go:282] 0 containers: []
	W1206 10:54:28.005248  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:28.005255  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:28.005329  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:28.034341  405191 cri.go:89] found id: ""
	I1206 10:54:28.034355  405191 logs.go:282] 0 containers: []
	W1206 10:54:28.034362  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:28.034370  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:28.034381  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:28.107547  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:28.107567  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:28.136561  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:28.136578  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:28.206187  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:28.206206  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:28.224556  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:28.224580  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:28.311110  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:28.302520   15128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:28.303509   15128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:28.305089   15128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:28.305582   15128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:28.307158   15128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:28.302520   15128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:28.303509   15128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:28.305089   15128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:28.305582   15128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:28.307158   15128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
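
Every "describe nodes" attempt fails the same way: nothing is listening on localhost:8441, so the client gets connection refused before any API call is made. Two quick checks from inside the node, both expected to come back empty or failing while no kube-apiserver container exists (a sketch, assuming ss and curl are available in the node image):

    sudo ss -ltnp | grep -w 8441 || echo "nothing listening on 8441"
    curl -sk --max-time 5 https://localhost:8441/livez || echo "apiserver unreachable"
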
	I1206 10:54:30.811550  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:30.821711  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:30.821769  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:30.850956  405191 cri.go:89] found id: ""
	I1206 10:54:30.850970  405191 logs.go:282] 0 containers: []
	W1206 10:54:30.850979  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:30.850984  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:30.851045  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:30.876542  405191 cri.go:89] found id: ""
	I1206 10:54:30.876558  405191 logs.go:282] 0 containers: []
	W1206 10:54:30.876565  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:30.876571  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:30.876630  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:30.902552  405191 cri.go:89] found id: ""
	I1206 10:54:30.902566  405191 logs.go:282] 0 containers: []
	W1206 10:54:30.902573  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:30.902578  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:30.902635  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:30.928737  405191 cri.go:89] found id: ""
	I1206 10:54:30.928751  405191 logs.go:282] 0 containers: []
	W1206 10:54:30.928758  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:30.928764  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:30.928829  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:30.954309  405191 cri.go:89] found id: ""
	I1206 10:54:30.954323  405191 logs.go:282] 0 containers: []
	W1206 10:54:30.954330  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:30.954335  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:30.954394  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:30.980239  405191 cri.go:89] found id: ""
	I1206 10:54:30.980251  405191 logs.go:282] 0 containers: []
	W1206 10:54:30.980258  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:30.980263  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:30.980319  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:31.010962  405191 cri.go:89] found id: ""
	I1206 10:54:31.010977  405191 logs.go:282] 0 containers: []
	W1206 10:54:31.010985  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:31.010994  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:31.011006  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:31.078259  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:31.069995   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:31.070621   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:31.072176   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:31.072646   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:31.074155   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:31.069995   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:31.070621   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:31.072176   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:31.072646   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:31.074155   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:31.078270  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:31.078282  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:31.147428  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:31.147455  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:31.181028  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:31.181045  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:31.253555  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:31.253574  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:33.770610  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:33.781236  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:33.781299  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:33.806547  405191 cri.go:89] found id: ""
	I1206 10:54:33.806561  405191 logs.go:282] 0 containers: []
	W1206 10:54:33.806568  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:33.806574  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:33.806632  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:33.832359  405191 cri.go:89] found id: ""
	I1206 10:54:33.832371  405191 logs.go:282] 0 containers: []
	W1206 10:54:33.832379  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:33.832383  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:33.832442  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:33.857194  405191 cri.go:89] found id: ""
	I1206 10:54:33.857207  405191 logs.go:282] 0 containers: []
	W1206 10:54:33.857214  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:33.857219  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:33.857280  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:33.886113  405191 cri.go:89] found id: ""
	I1206 10:54:33.886126  405191 logs.go:282] 0 containers: []
	W1206 10:54:33.886133  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:33.886138  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:33.886194  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:33.914351  405191 cri.go:89] found id: ""
	I1206 10:54:33.914364  405191 logs.go:282] 0 containers: []
	W1206 10:54:33.914371  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:33.914376  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:33.914438  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:33.939584  405191 cri.go:89] found id: ""
	I1206 10:54:33.939598  405191 logs.go:282] 0 containers: []
	W1206 10:54:33.939605  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:33.939611  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:33.939683  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:33.965467  405191 cri.go:89] found id: ""
	I1206 10:54:33.965481  405191 logs.go:282] 0 containers: []
	W1206 10:54:33.965488  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:33.965496  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:33.965506  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:34.034434  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:34.034456  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:34.068244  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:34.068263  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:34.136528  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:34.136548  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:34.151695  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:34.151713  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:34.237655  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:34.227619   15334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:34.228750   15334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:34.231547   15334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:34.232096   15334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:34.233598   15334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:34.227619   15334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:34.228750   15334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:34.231547   15334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:34.232096   15334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:34.233598   15334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:36.737997  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:36.748632  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:36.748739  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:36.774541  405191 cri.go:89] found id: ""
	I1206 10:54:36.774554  405191 logs.go:282] 0 containers: []
	W1206 10:54:36.774563  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:36.774568  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:36.774628  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:36.804563  405191 cri.go:89] found id: ""
	I1206 10:54:36.804577  405191 logs.go:282] 0 containers: []
	W1206 10:54:36.804585  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:36.804590  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:36.804649  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:36.829295  405191 cri.go:89] found id: ""
	I1206 10:54:36.829309  405191 logs.go:282] 0 containers: []
	W1206 10:54:36.829316  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:36.829322  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:36.829384  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:36.854740  405191 cri.go:89] found id: ""
	I1206 10:54:36.854754  405191 logs.go:282] 0 containers: []
	W1206 10:54:36.854761  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:36.854767  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:36.854827  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:36.879535  405191 cri.go:89] found id: ""
	I1206 10:54:36.879548  405191 logs.go:282] 0 containers: []
	W1206 10:54:36.879555  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:36.879560  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:36.879621  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:36.908804  405191 cri.go:89] found id: ""
	I1206 10:54:36.908818  405191 logs.go:282] 0 containers: []
	W1206 10:54:36.908826  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:36.908831  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:36.908891  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:36.935290  405191 cri.go:89] found id: ""
	I1206 10:54:36.935312  405191 logs.go:282] 0 containers: []
	W1206 10:54:36.935320  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:36.935328  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:36.935338  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:37.005221  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:37.005253  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:37.023044  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:37.023070  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:37.090033  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:37.082290   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:37.082864   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:37.084384   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:37.084721   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:37.086198   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:37.082290   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:37.082864   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:37.084384   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:37.084721   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:37.086198   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:37.090044  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:37.090055  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:37.158891  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:37.158911  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:39.688451  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:39.698958  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:39.699020  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:39.725003  405191 cri.go:89] found id: ""
	I1206 10:54:39.725017  405191 logs.go:282] 0 containers: []
	W1206 10:54:39.725024  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:39.725029  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:39.725086  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:39.750186  405191 cri.go:89] found id: ""
	I1206 10:54:39.750208  405191 logs.go:282] 0 containers: []
	W1206 10:54:39.750215  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:39.750221  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:39.750286  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:39.777512  405191 cri.go:89] found id: ""
	I1206 10:54:39.777527  405191 logs.go:282] 0 containers: []
	W1206 10:54:39.777534  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:39.777539  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:39.777598  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:39.805960  405191 cri.go:89] found id: ""
	I1206 10:54:39.805974  405191 logs.go:282] 0 containers: []
	W1206 10:54:39.805981  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:39.805987  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:39.806048  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:39.832070  405191 cri.go:89] found id: ""
	I1206 10:54:39.832086  405191 logs.go:282] 0 containers: []
	W1206 10:54:39.832093  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:39.832099  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:39.832162  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:39.856950  405191 cri.go:89] found id: ""
	I1206 10:54:39.856964  405191 logs.go:282] 0 containers: []
	W1206 10:54:39.856970  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:39.856976  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:39.857034  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:39.882830  405191 cri.go:89] found id: ""
	I1206 10:54:39.882844  405191 logs.go:282] 0 containers: []
	W1206 10:54:39.882851  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:39.882859  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:39.882869  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:39.948996  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:39.949016  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:39.964250  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:39.964266  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:40.040200  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:40.026040   15535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:40.026898   15535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:40.028891   15535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:40.029963   15535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:40.030727   15535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:40.026040   15535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:40.026898   15535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:40.028891   15535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:40.029963   15535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:40.030727   15535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:40.040211  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:40.040222  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:40.112805  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:40.112828  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:42.645898  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:42.656339  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:42.656399  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:42.681441  405191 cri.go:89] found id: ""
	I1206 10:54:42.681456  405191 logs.go:282] 0 containers: []
	W1206 10:54:42.681462  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:42.681468  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:42.681529  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:42.706692  405191 cri.go:89] found id: ""
	I1206 10:54:42.706706  405191 logs.go:282] 0 containers: []
	W1206 10:54:42.706713  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:42.706718  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:42.706781  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:42.734049  405191 cri.go:89] found id: ""
	I1206 10:54:42.734063  405191 logs.go:282] 0 containers: []
	W1206 10:54:42.734070  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:42.734075  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:42.734136  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:42.759095  405191 cri.go:89] found id: ""
	I1206 10:54:42.759115  405191 logs.go:282] 0 containers: []
	W1206 10:54:42.759123  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:42.759128  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:42.759190  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:42.786861  405191 cri.go:89] found id: ""
	I1206 10:54:42.786875  405191 logs.go:282] 0 containers: []
	W1206 10:54:42.786882  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:42.786887  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:42.786949  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:42.817648  405191 cri.go:89] found id: ""
	I1206 10:54:42.817663  405191 logs.go:282] 0 containers: []
	W1206 10:54:42.817670  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:42.817675  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:42.817738  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:42.844223  405191 cri.go:89] found id: ""
	I1206 10:54:42.844245  405191 logs.go:282] 0 containers: []
	W1206 10:54:42.844253  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:42.844261  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:42.844278  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:42.914866  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:42.904424   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:42.904903   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:42.907237   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:42.908578   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:42.909360   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:42.904424   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:42.904903   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:42.907237   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:42.908578   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:42.909360   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:42.914877  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:42.914888  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:42.987160  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:42.987181  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:43.017513  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:43.017529  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:43.084573  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:43.084595  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
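
The pgrep probes arrive roughly every three seconds, which suggests a fixed-interval wait loop around 'sudo pgrep -xnf kube-apiserver.*minikube.*'. A hand-rolled equivalent (the pattern and cadence come from the log; the 120-second timeout is an assumption, not the driver's actual value):

    deadline=$((SECONDS + 120))   # assumed timeout, not taken from the log
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      (( SECONDS >= deadline )) && { echo "timed out waiting for kube-apiserver" >&2; exit 1; }
      sleep 3                     # matches the ~3s spacing of the probes above
    done
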
	I1206 10:54:45.600685  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:45.611239  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:45.611299  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:45.635510  405191 cri.go:89] found id: ""
	I1206 10:54:45.635525  405191 logs.go:282] 0 containers: []
	W1206 10:54:45.635532  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:45.635538  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:45.635604  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:45.664995  405191 cri.go:89] found id: ""
	I1206 10:54:45.665008  405191 logs.go:282] 0 containers: []
	W1206 10:54:45.665015  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:45.665020  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:45.665077  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:45.691036  405191 cri.go:89] found id: ""
	I1206 10:54:45.691050  405191 logs.go:282] 0 containers: []
	W1206 10:54:45.691057  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:45.691062  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:45.691120  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:45.716374  405191 cri.go:89] found id: ""
	I1206 10:54:45.716388  405191 logs.go:282] 0 containers: []
	W1206 10:54:45.716395  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:45.716400  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:45.716461  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:45.742083  405191 cri.go:89] found id: ""
	I1206 10:54:45.742097  405191 logs.go:282] 0 containers: []
	W1206 10:54:45.742105  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:45.742110  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:45.742177  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:45.767269  405191 cri.go:89] found id: ""
	I1206 10:54:45.767282  405191 logs.go:282] 0 containers: []
	W1206 10:54:45.767290  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:45.767295  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:45.767352  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:45.793130  405191 cri.go:89] found id: ""
	I1206 10:54:45.793144  405191 logs.go:282] 0 containers: []
	W1206 10:54:45.793151  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:45.793158  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:45.793169  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:45.822623  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:45.822639  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:45.889014  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:45.889036  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:45.903697  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:45.903713  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:45.967833  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:45.959169   15755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:45.960025   15755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:45.961643   15755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:45.962228   15755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:45.963959   15755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:45.959169   15755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:45.960025   15755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:45.961643   15755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:45.962228   15755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:45.963959   15755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:45.967843  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:45.967854  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:48.539593  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:48.549488  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:48.549547  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:48.578962  405191 cri.go:89] found id: ""
	I1206 10:54:48.578976  405191 logs.go:282] 0 containers: []
	W1206 10:54:48.578983  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:48.578989  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:48.579060  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:48.604320  405191 cri.go:89] found id: ""
	I1206 10:54:48.604335  405191 logs.go:282] 0 containers: []
	W1206 10:54:48.604342  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:48.604347  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:48.604407  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:48.630562  405191 cri.go:89] found id: ""
	I1206 10:54:48.630575  405191 logs.go:282] 0 containers: []
	W1206 10:54:48.630583  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:48.630588  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:48.630645  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:48.659186  405191 cri.go:89] found id: ""
	I1206 10:54:48.659200  405191 logs.go:282] 0 containers: []
	W1206 10:54:48.659207  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:48.659218  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:48.659278  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:48.686349  405191 cri.go:89] found id: ""
	I1206 10:54:48.686363  405191 logs.go:282] 0 containers: []
	W1206 10:54:48.686371  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:48.686376  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:48.686433  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:48.712958  405191 cri.go:89] found id: ""
	I1206 10:54:48.712973  405191 logs.go:282] 0 containers: []
	W1206 10:54:48.712980  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:48.712985  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:48.713045  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:48.738763  405191 cri.go:89] found id: ""
	I1206 10:54:48.738777  405191 logs.go:282] 0 containers: []
	W1206 10:54:48.738783  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:48.738791  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:48.738801  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:48.753416  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:48.753431  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:48.818598  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:48.810121   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:48.810830   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:48.812598   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:48.813183   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:48.814760   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:48.810121   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:48.810830   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:48.812598   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:48.813183   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:48.814760   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:48.818609  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:48.818620  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:48.888023  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:48.888043  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:48.917094  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:48.917110  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
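
Only the order of the log-gathering steps varies between cycles; the failing step is always "describe nodes". That probe can be re-run by hand with the exact command from the log, and it keeps printing the identical connection-refused errors until an apiserver listens on 8441:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
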
	I1206 10:54:51.485627  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:51.497092  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:51.497157  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:51.529254  405191 cri.go:89] found id: ""
	I1206 10:54:51.529268  405191 logs.go:282] 0 containers: []
	W1206 10:54:51.529275  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:51.529281  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:51.529340  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:51.555292  405191 cri.go:89] found id: ""
	I1206 10:54:51.555305  405191 logs.go:282] 0 containers: []
	W1206 10:54:51.555312  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:51.555316  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:51.555390  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:51.580443  405191 cri.go:89] found id: ""
	I1206 10:54:51.580458  405191 logs.go:282] 0 containers: []
	W1206 10:54:51.580465  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:51.580470  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:51.580529  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:51.605907  405191 cri.go:89] found id: ""
	I1206 10:54:51.605921  405191 logs.go:282] 0 containers: []
	W1206 10:54:51.605928  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:51.605933  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:51.605991  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:51.630731  405191 cri.go:89] found id: ""
	I1206 10:54:51.630745  405191 logs.go:282] 0 containers: []
	W1206 10:54:51.630752  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:51.630757  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:51.630816  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:51.655906  405191 cri.go:89] found id: ""
	I1206 10:54:51.655919  405191 logs.go:282] 0 containers: []
	W1206 10:54:51.655926  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:51.655931  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:51.655987  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:51.681242  405191 cri.go:89] found id: ""
	I1206 10:54:51.681256  405191 logs.go:282] 0 containers: []
	W1206 10:54:51.681267  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:51.681275  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:51.681285  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:51.750829  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:51.750849  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:51.766064  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:51.766080  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:51.831905  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:51.823637   15956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:51.824299   15956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:51.825840   15956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:51.826394   15956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:51.827960   15956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:51.823637   15956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:51.824299   15956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:51.825840   15956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:51.826394   15956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:51.827960   15956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:51.831915  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:51.831925  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:51.901462  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:51.901484  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
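
Every kubectl attempt in this stretch dies with connection refused on localhost:8441, which is consistent with the empty crictl listings: there is no apiserver container to answer. A quick way to confirm from the node itself (ss and curl are not used by the log; /healthz and /readyz are the standard kube-apiserver health endpoints, assumed here rather than taken from this report):

    # Is anything listening on the apiserver port?
    sudo ss -tlnp | grep 8441 || echo "nothing listening on 8441"

    # If an apiserver were up, its standard health endpoints would answer:
    curl -k https://localhost:8441/healthz
    curl -k https://localhost:8441/readyz
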
	I1206 10:54:54.431319  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:54.441623  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:54.441686  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:54.470441  405191 cri.go:89] found id: ""
	I1206 10:54:54.470456  405191 logs.go:282] 0 containers: []
	W1206 10:54:54.470463  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:54.470469  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:54.470527  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:54.505844  405191 cri.go:89] found id: ""
	I1206 10:54:54.505858  405191 logs.go:282] 0 containers: []
	W1206 10:54:54.505865  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:54.505870  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:54.505931  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:54.540765  405191 cri.go:89] found id: ""
	I1206 10:54:54.540779  405191 logs.go:282] 0 containers: []
	W1206 10:54:54.540786  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:54.540791  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:54.540859  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:54.568534  405191 cri.go:89] found id: ""
	I1206 10:54:54.568559  405191 logs.go:282] 0 containers: []
	W1206 10:54:54.568566  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:54.568571  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:54.568631  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:54.598488  405191 cri.go:89] found id: ""
	I1206 10:54:54.598501  405191 logs.go:282] 0 containers: []
	W1206 10:54:54.598508  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:54.598513  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:54.598573  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:54.625601  405191 cri.go:89] found id: ""
	I1206 10:54:54.625615  405191 logs.go:282] 0 containers: []
	W1206 10:54:54.625622  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:54.625627  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:54.625684  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:54.651039  405191 cri.go:89] found id: ""
	I1206 10:54:54.651053  405191 logs.go:282] 0 containers: []
	W1206 10:54:54.651069  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:54.651077  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:54.651088  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:54.721711  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:54.712700   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:54.713574   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:54.715366   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:54.715761   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:54.717298   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:54.712700   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:54.713574   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:54.715366   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:54.715761   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:54.717298   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:54.721724  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:54.721734  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:54.793778  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:54.793803  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:54.825565  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:54.825580  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:54.891107  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:54.891127  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
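
When the apiserver is absent, the gathered kubelet and CRI-O journals are where the cause usually shows up (failed static-pod creation, image pulls, runtime errors). A sketch using the same commands the log runs, plus one unit-status check the log does not perform (a hypothetical manual follow-up, not part of this test):

    sudo journalctl -u kubelet -n 400 --no-pager
    sudo journalctl -u crio -n 400 --no-pager
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    # Not run by the log; a common manual follow-up:
    systemctl status kubelet crio --no-pager
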
	I1206 10:54:57.406177  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:57.416168  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:57.416231  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:57.444260  405191 cri.go:89] found id: ""
	I1206 10:54:57.444274  405191 logs.go:282] 0 containers: []
	W1206 10:54:57.444281  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:57.444286  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:57.444352  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:57.473921  405191 cri.go:89] found id: ""
	I1206 10:54:57.473935  405191 logs.go:282] 0 containers: []
	W1206 10:54:57.473942  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:57.473947  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:57.474006  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:57.507969  405191 cri.go:89] found id: ""
	I1206 10:54:57.507983  405191 logs.go:282] 0 containers: []
	W1206 10:54:57.507990  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:57.507995  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:57.508057  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:57.536405  405191 cri.go:89] found id: ""
	I1206 10:54:57.536420  405191 logs.go:282] 0 containers: []
	W1206 10:54:57.536428  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:57.536433  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:57.536502  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:57.564180  405191 cri.go:89] found id: ""
	I1206 10:54:57.564194  405191 logs.go:282] 0 containers: []
	W1206 10:54:57.564201  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:57.564206  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:57.564271  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:57.594665  405191 cri.go:89] found id: ""
	I1206 10:54:57.594679  405191 logs.go:282] 0 containers: []
	W1206 10:54:57.594687  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:57.594692  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:57.594751  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:57.627345  405191 cri.go:89] found id: ""
	I1206 10:54:57.627360  405191 logs.go:282] 0 containers: []
	W1206 10:54:57.627367  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:57.627398  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:57.627409  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:57.694026  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:57.694046  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:57.708621  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:57.708636  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:57.772743  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:57.764569   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:57.765305   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:57.766828   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:57.767291   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:57.768789   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:57.764569   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:57.765305   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:57.766828   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:57.767291   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:57.768789   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:57.772753  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:57.772764  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:57.841816  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:57.841836  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
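
The timestamps show the probe re-running on a roughly three-second cadence (10:54:51, :54, :57, 10:55:00, ...). A minimal sketch of that kind of wait loop in shell, using the same pgrep pattern as the log (the timeout value is illustrative, not minikube's):

    deadline=$((SECONDS + 120))   # illustrative timeout, not from the report
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if [ "$SECONDS" -ge "$deadline" ]; then
        echo "kube-apiserver never appeared" >&2
        exit 1
      fi
      sleep 3
    done
    echo "kube-apiserver is running"
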
	I1206 10:55:00.375636  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:00.396560  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:55:00.396634  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:55:00.458455  405191 cri.go:89] found id: ""
	I1206 10:55:00.458471  405191 logs.go:282] 0 containers: []
	W1206 10:55:00.458479  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:55:00.458485  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:55:00.458553  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:55:00.497287  405191 cri.go:89] found id: ""
	I1206 10:55:00.497304  405191 logs.go:282] 0 containers: []
	W1206 10:55:00.497311  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:55:00.497317  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:55:00.497382  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:55:00.531076  405191 cri.go:89] found id: ""
	I1206 10:55:00.531092  405191 logs.go:282] 0 containers: []
	W1206 10:55:00.531099  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:55:00.531104  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:55:00.531172  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:55:00.567464  405191 cri.go:89] found id: ""
	I1206 10:55:00.567485  405191 logs.go:282] 0 containers: []
	W1206 10:55:00.567493  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:55:00.567499  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:55:00.567600  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:55:00.600497  405191 cri.go:89] found id: ""
	I1206 10:55:00.600512  405191 logs.go:282] 0 containers: []
	W1206 10:55:00.600520  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:55:00.600526  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:55:00.600596  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:55:00.648830  405191 cri.go:89] found id: ""
	I1206 10:55:00.648852  405191 logs.go:282] 0 containers: []
	W1206 10:55:00.648861  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:55:00.648868  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:55:00.648939  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:55:00.678773  405191 cri.go:89] found id: ""
	I1206 10:55:00.678789  405191 logs.go:282] 0 containers: []
	W1206 10:55:00.678797  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:55:00.678822  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:55:00.678834  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:55:00.748615  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:55:00.748637  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:55:00.764401  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:55:00.764420  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:55:00.836152  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:55:00.827231   16271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:00.828085   16271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:00.830005   16271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:00.830399   16271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:00.832026   16271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:55:00.827231   16271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:00.828085   16271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:00.830005   16271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:00.830399   16271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:00.832026   16271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:55:00.836163  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:55:00.836174  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:55:00.909732  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:55:00.909761  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:55:03.441095  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:03.451635  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:55:03.451701  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:55:03.486201  405191 cri.go:89] found id: ""
	I1206 10:55:03.486214  405191 logs.go:282] 0 containers: []
	W1206 10:55:03.486222  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:55:03.486226  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:55:03.486286  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:55:03.530153  405191 cri.go:89] found id: ""
	I1206 10:55:03.530167  405191 logs.go:282] 0 containers: []
	W1206 10:55:03.530174  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:55:03.530179  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:55:03.530243  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:55:03.559790  405191 cri.go:89] found id: ""
	I1206 10:55:03.559804  405191 logs.go:282] 0 containers: []
	W1206 10:55:03.559811  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:55:03.559816  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:55:03.559874  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:55:03.586392  405191 cri.go:89] found id: ""
	I1206 10:55:03.586406  405191 logs.go:282] 0 containers: []
	W1206 10:55:03.586413  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:55:03.586418  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:55:03.586477  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:55:03.612699  405191 cri.go:89] found id: ""
	I1206 10:55:03.612714  405191 logs.go:282] 0 containers: []
	W1206 10:55:03.612726  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:55:03.612732  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:55:03.612827  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:55:03.641895  405191 cri.go:89] found id: ""
	I1206 10:55:03.641909  405191 logs.go:282] 0 containers: []
	W1206 10:55:03.641916  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:55:03.641921  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:55:03.641978  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:55:03.668194  405191 cri.go:89] found id: ""
	I1206 10:55:03.668208  405191 logs.go:282] 0 containers: []
	W1206 10:55:03.668216  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:55:03.668224  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:55:03.668234  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:55:03.738567  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:55:03.738585  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:55:03.753715  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:55:03.753732  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:55:03.819356  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:55:03.811487   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:03.812006   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:03.813500   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:03.813921   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:03.815528   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:55:03.811487   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:03.812006   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:03.813500   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:03.813921   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:03.815528   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:55:03.819368  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:55:03.819393  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:55:03.888845  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:55:03.888866  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
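
The "container status" step packs a fallback chain into one line: resolve crictl if which finds it, otherwise try the bare name, and if the whole crictl listing fails, fall back to the docker CLI. The same command, only reformatted with comments:

    # `which crictl || echo crictl` yields the resolved path, or the
    # bare name when `which` finds nothing; if the crictl listing
    # fails outright, fall back to docker.
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
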
	I1206 10:55:06.421279  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:06.431630  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:55:06.431691  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:55:06.457432  405191 cri.go:89] found id: ""
	I1206 10:55:06.457446  405191 logs.go:282] 0 containers: []
	W1206 10:55:06.457453  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:55:06.457458  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:55:06.457525  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:55:06.498897  405191 cri.go:89] found id: ""
	I1206 10:55:06.498911  405191 logs.go:282] 0 containers: []
	W1206 10:55:06.498918  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:55:06.498923  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:55:06.498994  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:55:06.532288  405191 cri.go:89] found id: ""
	I1206 10:55:06.532320  405191 logs.go:282] 0 containers: []
	W1206 10:55:06.532328  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:55:06.532332  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:55:06.532403  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:55:06.558737  405191 cri.go:89] found id: ""
	I1206 10:55:06.558751  405191 logs.go:282] 0 containers: []
	W1206 10:55:06.558758  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:55:06.558764  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:55:06.558835  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:55:06.588791  405191 cri.go:89] found id: ""
	I1206 10:55:06.588805  405191 logs.go:282] 0 containers: []
	W1206 10:55:06.588813  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:55:06.588818  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:55:06.588887  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:55:06.615097  405191 cri.go:89] found id: ""
	I1206 10:55:06.615110  405191 logs.go:282] 0 containers: []
	W1206 10:55:06.615117  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:55:06.615122  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:55:06.615182  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:55:06.640273  405191 cri.go:89] found id: ""
	I1206 10:55:06.640297  405191 logs.go:282] 0 containers: []
	W1206 10:55:06.640305  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:55:06.640312  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:55:06.640323  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:55:06.709781  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:55:06.709800  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:55:06.724307  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:55:06.724323  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:55:06.788894  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:55:06.780020   16482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:06.780621   16482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:06.782266   16482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:06.782823   16482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:06.784380   16482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:55:06.780020   16482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:06.780621   16482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:06.782266   16482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:06.782823   16482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:06.784380   16482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:55:06.788903  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:55:06.788913  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:55:06.857942  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:55:06.857963  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:55:09.392819  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:09.402617  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:55:09.402675  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:55:09.429928  405191 cri.go:89] found id: ""
	I1206 10:55:09.429942  405191 logs.go:282] 0 containers: []
	W1206 10:55:09.429949  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:55:09.429955  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:55:09.430018  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:55:09.455893  405191 cri.go:89] found id: ""
	I1206 10:55:09.455907  405191 logs.go:282] 0 containers: []
	W1206 10:55:09.455913  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:55:09.455918  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:55:09.455975  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:55:09.492759  405191 cri.go:89] found id: ""
	I1206 10:55:09.492772  405191 logs.go:282] 0 containers: []
	W1206 10:55:09.492779  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:55:09.492784  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:55:09.492842  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:55:09.524405  405191 cri.go:89] found id: ""
	I1206 10:55:09.524418  405191 logs.go:282] 0 containers: []
	W1206 10:55:09.524425  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:55:09.524430  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:55:09.524488  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:55:09.555465  405191 cri.go:89] found id: ""
	I1206 10:55:09.555479  405191 logs.go:282] 0 containers: []
	W1206 10:55:09.555486  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:55:09.555491  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:55:09.555551  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:55:09.582561  405191 cri.go:89] found id: ""
	I1206 10:55:09.582575  405191 logs.go:282] 0 containers: []
	W1206 10:55:09.582582  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:55:09.582588  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:55:09.582646  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:55:09.608767  405191 cri.go:89] found id: ""
	I1206 10:55:09.608781  405191 logs.go:282] 0 containers: []
	W1206 10:55:09.608788  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:55:09.608796  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:55:09.608810  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:55:09.677518  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:55:09.677539  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:55:09.692935  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:55:09.692955  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:55:09.760066  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:55:09.750973   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:09.751783   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:09.753612   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:09.754387   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:09.755955   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:55:09.750973   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:09.751783   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:09.753612   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:09.754387   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:09.755955   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:55:09.760077  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:55:09.760087  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:55:09.829605  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:55:09.829626  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
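
The describe-nodes step drives the version-matched kubectl binary that minikube stages on the node, against the node-local kubeconfig, and it fails with the same refused connection. Re-running it by hand with request-level verbosity shows exactly which endpoint is being dialed (-v=6 is standard kubectl, assumed here rather than taken from the log):

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      get nodes -v=6
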
	I1206 10:55:12.359607  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:12.370647  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:55:12.370708  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:55:12.402338  405191 cri.go:89] found id: ""
	I1206 10:55:12.402353  405191 logs.go:282] 0 containers: []
	W1206 10:55:12.402361  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:55:12.402366  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:55:12.402435  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:55:12.428498  405191 cri.go:89] found id: ""
	I1206 10:55:12.428513  405191 logs.go:282] 0 containers: []
	W1206 10:55:12.428520  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:55:12.428525  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:55:12.428587  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:55:12.454311  405191 cri.go:89] found id: ""
	I1206 10:55:12.454325  405191 logs.go:282] 0 containers: []
	W1206 10:55:12.454333  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:55:12.454338  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:55:12.454399  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:55:12.493402  405191 cri.go:89] found id: ""
	I1206 10:55:12.493416  405191 logs.go:282] 0 containers: []
	W1206 10:55:12.493423  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:55:12.493429  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:55:12.493487  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:55:12.527015  405191 cri.go:89] found id: ""
	I1206 10:55:12.527029  405191 logs.go:282] 0 containers: []
	W1206 10:55:12.527036  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:55:12.527042  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:55:12.527103  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:55:12.556788  405191 cri.go:89] found id: ""
	I1206 10:55:12.556812  405191 logs.go:282] 0 containers: []
	W1206 10:55:12.556820  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:55:12.556825  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:55:12.556897  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:55:12.584336  405191 cri.go:89] found id: ""
	I1206 10:55:12.584350  405191 logs.go:282] 0 containers: []
	W1206 10:55:12.584357  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:55:12.584365  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:55:12.584376  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:55:12.614039  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:55:12.614055  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:55:12.680316  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:55:12.680338  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:55:12.696525  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:55:12.696542  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:55:12.760110  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:55:12.751882   16704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:12.752591   16704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:12.754143   16704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:12.754484   16704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:12.756046   16704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:55:12.751882   16704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:12.752591   16704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:12.754143   16704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:12.754484   16704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:12.756046   16704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:55:12.760120  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:55:12.760131  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:55:15.332168  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:15.342873  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:55:15.342950  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:55:15.371175  405191 cri.go:89] found id: ""
	I1206 10:55:15.371189  405191 logs.go:282] 0 containers: []
	W1206 10:55:15.371207  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:55:15.371212  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:55:15.371279  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:55:15.397085  405191 cri.go:89] found id: ""
	I1206 10:55:15.397100  405191 logs.go:282] 0 containers: []
	W1206 10:55:15.397107  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:55:15.397112  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:55:15.397171  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:55:15.422142  405191 cri.go:89] found id: ""
	I1206 10:55:15.422156  405191 logs.go:282] 0 containers: []
	W1206 10:55:15.422163  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:55:15.422174  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:55:15.422231  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:55:15.447127  405191 cri.go:89] found id: ""
	I1206 10:55:15.447141  405191 logs.go:282] 0 containers: []
	W1206 10:55:15.447148  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:55:15.447154  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:55:15.447212  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:55:15.477786  405191 cri.go:89] found id: ""
	I1206 10:55:15.477800  405191 logs.go:282] 0 containers: []
	W1206 10:55:15.477808  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:55:15.477813  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:55:15.477875  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:55:15.507270  405191 cri.go:89] found id: ""
	I1206 10:55:15.507285  405191 logs.go:282] 0 containers: []
	W1206 10:55:15.507292  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:55:15.507297  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:55:15.507360  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:55:15.536433  405191 cri.go:89] found id: ""
	I1206 10:55:15.536451  405191 logs.go:282] 0 containers: []
	W1206 10:55:15.536458  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:55:15.536470  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:55:15.536480  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:55:15.608040  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:55:15.608061  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:55:15.623617  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:55:15.623635  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:55:15.692548  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:55:15.684603   16800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:15.685140   16800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:15.686901   16800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:15.687564   16800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:15.688573   16800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:55:15.684603   16800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:15.685140   16800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:15.686901   16800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:15.687564   16800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:15.688573   16800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:55:15.692558  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:55:15.692581  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:55:15.760517  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:55:15.760537  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
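The block above is one full iteration of minikube's health poll: a pgrep for a running kube-apiserver process, then a per-component crictl listing. The same sweep can be run by hand inside the node (commands taken verbatim from the Run: lines); empty output corresponds to the found id: "" lines in the log:

  # Newest process whose full command line matches kube-apiserver under minikube.
  sudo pgrep -xnf 'kube-apiserver.*minikube.*'

  # List all containers (running or exited) for each control-plane component.
  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
              kube-controller-manager kindnet; do
    echo "== $name =="
    sudo crictl ps -a --quiet --name="$name"
  done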
	I1206 10:55:18.289173  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:18.300544  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:55:18.300610  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:55:18.327678  405191 cri.go:89] found id: ""
	I1206 10:55:18.327692  405191 logs.go:282] 0 containers: []
	W1206 10:55:18.327699  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:55:18.327704  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:55:18.327764  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:55:18.353999  405191 cri.go:89] found id: ""
	I1206 10:55:18.354014  405191 logs.go:282] 0 containers: []
	W1206 10:55:18.354021  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:55:18.354026  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:55:18.354084  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:55:18.382276  405191 cri.go:89] found id: ""
	I1206 10:55:18.382291  405191 logs.go:282] 0 containers: []
	W1206 10:55:18.382298  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:55:18.382304  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:55:18.382365  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:55:18.410827  405191 cri.go:89] found id: ""
	I1206 10:55:18.410841  405191 logs.go:282] 0 containers: []
	W1206 10:55:18.410847  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:55:18.410852  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:55:18.410911  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:55:18.436138  405191 cri.go:89] found id: ""
	I1206 10:55:18.436160  405191 logs.go:282] 0 containers: []
	W1206 10:55:18.436167  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:55:18.436172  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:55:18.436233  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:55:18.462254  405191 cri.go:89] found id: ""
	I1206 10:55:18.462269  405191 logs.go:282] 0 containers: []
	W1206 10:55:18.462276  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:55:18.462283  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:55:18.462346  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:55:18.492347  405191 cri.go:89] found id: ""
	I1206 10:55:18.492362  405191 logs.go:282] 0 containers: []
	W1206 10:55:18.492369  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:55:18.492377  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:55:18.492388  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:55:18.509956  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:55:18.509973  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:55:18.581031  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:55:18.572020   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:18.572812   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:18.573929   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:18.574912   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:18.575787   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:55:18.572020   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:18.572812   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:18.573929   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:18.574912   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:18.575787   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:55:18.581041  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:55:18.581055  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:55:18.650942  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:55:18.650963  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:55:18.680668  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:55:18.680685  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
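When the poll comes up empty, minikube falls back to collecting logs from the same four sources each cycle; the equivalent manual commands, again lifted from the Run: lines above, are:

  sudo journalctl -u kubelet -n 400        # kubelet unit log
  sudo journalctl -u crio -n 400           # CRI-O unit log
  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
  sudo crictl ps -a || sudo docker ps -a   # container status, with docker fallback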
	... (the same gather cycle repeats three more times, at 10:55:21, 10:55:24 and 10:55:27 with kubectl PIDs 17003, 17118 and 17226: pgrep finds no kube-apiserver process, every "crictl ps -a --quiet --name=..." listing for kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager and kindnet returns zero containers, and each "kubectl describe nodes" attempt fails with the same connection-refused errors against localhost:8441) ...
	I1206 10:55:30.156731  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:30.168052  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:55:30.168115  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:55:30.195190  405191 cri.go:89] found id: ""
	I1206 10:55:30.195205  405191 logs.go:282] 0 containers: []
	W1206 10:55:30.195237  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:55:30.195243  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:55:30.195315  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:55:30.222581  405191 cri.go:89] found id: ""
	I1206 10:55:30.222615  405191 logs.go:282] 0 containers: []
	W1206 10:55:30.222622  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:55:30.222628  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:55:30.222697  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:55:30.251144  405191 cri.go:89] found id: ""
	I1206 10:55:30.251162  405191 logs.go:282] 0 containers: []
	W1206 10:55:30.251173  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:55:30.251178  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:55:30.251280  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:55:30.282704  405191 cri.go:89] found id: ""
	I1206 10:55:30.282731  405191 logs.go:282] 0 containers: []
	W1206 10:55:30.282739  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:55:30.282744  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:55:30.282818  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:55:30.308787  405191 cri.go:89] found id: ""
	I1206 10:55:30.308802  405191 logs.go:282] 0 containers: []
	W1206 10:55:30.308809  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:55:30.308814  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:55:30.308881  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:55:30.334479  405191 cri.go:89] found id: ""
	I1206 10:55:30.334494  405191 logs.go:282] 0 containers: []
	W1206 10:55:30.334501  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:55:30.334507  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:55:30.334582  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:55:30.361350  405191 cri.go:89] found id: ""
	I1206 10:55:30.361365  405191 logs.go:282] 0 containers: []
	W1206 10:55:30.361372  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:55:30.361380  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:55:30.361390  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:55:30.438089  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:55:30.438120  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:55:30.453200  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:55:30.453217  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:55:30.539250  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:55:30.524592   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:30.527641   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:30.528089   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:30.529752   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:30.530427   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:55:30.524592   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:30.527641   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:30.528089   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:30.529752   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:30.530427   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:55:30.539272  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:55:30.539285  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:55:30.610101  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:55:30.610121  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:55:33.143484  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:33.153906  405191 kubeadm.go:602] duration metric: took 4m2.63956924s to restartPrimaryControlPlane
	W1206 10:55:33.153970  405191 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1206 10:55:33.154044  405191 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1206 10:55:33.564051  405191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:55:33.577264  405191 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 10:55:33.585285  405191 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 10:55:33.585343  405191 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:55:33.593207  405191 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:55:33.593217  405191 kubeadm.go:158] found existing configuration files:
	
	I1206 10:55:33.593284  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:55:33.601281  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:55:33.601338  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:55:33.609078  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:55:33.617336  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:55:33.617395  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:55:33.625100  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:55:33.633096  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:55:33.633153  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:55:33.640767  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:55:33.648692  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:55:33.648783  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
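The four grep-then-rm pairs above are a stale-kubeconfig sweep: any file under /etc/kubernetes that does not already reference the expected control-plane endpoint is deleted so that kubeadm init can regenerate it. A compact equivalent of that logic (a sketch of the behavior shown in the log, not minikube's actual source):

  endpoint='https://control-plane.minikube.internal:8441'
  for f in admin kubelet controller-manager scheduler; do
    conf="/etc/kubernetes/${f}.conf"
    # Remove the file unless it already references the expected endpoint;
    # grep also fails when the file is missing, so rm -f stays a no-op then.
    sudo grep -q "$endpoint" "$conf" || sudo rm -f "$conf"
  done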
	I1206 10:55:33.656355  405191 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 10:55:33.695114  405191 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 10:55:33.695495  405191 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:55:33.776558  405191 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:55:33.776622  405191 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:55:33.776656  405191 kubeadm.go:319] OS: Linux
	I1206 10:55:33.776700  405191 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:55:33.776747  405191 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:55:33.776793  405191 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:55:33.776839  405191 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:55:33.776886  405191 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:55:33.776933  405191 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:55:33.776976  405191 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:55:33.777023  405191 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:55:33.777067  405191 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:55:33.839562  405191 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:55:33.839700  405191 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:55:33.839825  405191 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:55:33.847872  405191 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:55:33.851528  405191 out.go:252]   - Generating certificates and keys ...
	I1206 10:55:33.851642  405191 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:55:33.851732  405191 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:55:33.851823  405191 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 10:55:33.851888  405191 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 10:55:33.851963  405191 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 10:55:33.852020  405191 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 10:55:33.852092  405191 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 10:55:33.852157  405191 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 10:55:33.852236  405191 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 10:55:33.852314  405191 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 10:55:33.852354  405191 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 10:55:33.852412  405191 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:55:34.131310  405191 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:55:34.288855  405191 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:55:34.553487  405191 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:55:35.148231  405191 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:55:35.211116  405191 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:55:35.211864  405191 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:55:35.214714  405191 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:55:35.218231  405191 out.go:252]   - Booting up control plane ...
	I1206 10:55:35.218330  405191 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:55:35.218406  405191 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:55:35.218472  405191 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:55:35.235870  405191 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:55:35.235976  405191 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:55:35.244902  405191 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:55:35.245320  405191 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:55:35.245379  405191 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:55:35.375634  405191 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:55:35.375747  405191 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 10:59:35.374512  405191 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000270227s
	I1206 10:59:35.374544  405191 kubeadm.go:319] 
	I1206 10:59:35.374605  405191 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 10:59:35.374643  405191 kubeadm.go:319] 	- The kubelet is not running
	I1206 10:59:35.374758  405191 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 10:59:35.374763  405191 kubeadm.go:319] 
	I1206 10:59:35.374876  405191 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 10:59:35.374910  405191 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 10:59:35.374942  405191 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 10:59:35.374945  405191 kubeadm.go:319] 
	I1206 10:59:35.380563  405191 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 10:59:35.380998  405191 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 10:59:35.381115  405191 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 10:59:35.381348  405191 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 10:59:35.381353  405191 kubeadm.go:319] 
	I1206 10:59:35.381420  405191 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
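One of the preflight warnings above concerns cgroups v1 deprecation: per the warning text, keeping kubelet v1.35 or newer on a cgroups v1 host requires explicitly setting FailCgroupV1 to false in the kubelet configuration. A hedged sketch of that opt-in (the camelCase field spelling is assumed from KubeletConfiguration conventions and should be verified against the KEP URL in the warning; note kubeadm rewrites /var/lib/kubelet/config.yaml during init, so this is illustrative only):

  # Assumption: KubeletConfiguration serializes the option as failCgroupV1.
  echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml

The "explicitly skip this validation" half of the warning is already satisfied here: the kubeadm invocation above lists SystemVerification in --ignore-preflight-errors.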
	W1206 10:59:35.381523  405191 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000270227s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
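The failure bottoms out at the kubelet's local health endpoint: kubeadm polled http://127.0.0.1:10248/healthz for the full 4m0s window and never got an answer. The two commands kubeadm itself suggests, plus the exact probe it was polling, are the natural next steps inside the node before any retry:

  systemctl status kubelet                   # is the unit even active?
  journalctl -xeu kubelet -n 100 --no-pager  # most recent kubelet errors
  curl -sS http://127.0.0.1:10248/healthz    # the probe kubeadm waited on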
	
	I1206 10:59:35.381613  405191 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1206 10:59:35.796714  405191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:59:35.809334  405191 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 10:59:35.809388  405191 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:59:35.817444  405191 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:59:35.817452  405191 kubeadm.go:158] found existing configuration files:
	
	I1206 10:59:35.817502  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:59:35.825442  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:59:35.825501  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:59:35.833082  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:59:35.842093  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:59:35.842159  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:59:35.851759  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:59:35.860099  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:59:35.860161  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:59:35.867900  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:59:35.876130  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:59:35.876188  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 10:59:35.884013  405191 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 10:59:35.926383  405191 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 10:59:35.926438  405191 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:59:36.016832  405191 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:59:36.016925  405191 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:59:36.016974  405191 kubeadm.go:319] OS: Linux
	I1206 10:59:36.017019  405191 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:59:36.017071  405191 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:59:36.017119  405191 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:59:36.017173  405191 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:59:36.017220  405191 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:59:36.017277  405191 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:59:36.017339  405191 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:59:36.017401  405191 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:59:36.017447  405191 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:59:36.080832  405191 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:59:36.080951  405191 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:59:36.081048  405191 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:59:36.091906  405191 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:59:36.097223  405191 out.go:252]   - Generating certificates and keys ...
	I1206 10:59:36.097345  405191 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:59:36.097426  405191 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:59:36.097511  405191 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 10:59:36.097596  405191 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 10:59:36.097675  405191 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 10:59:36.097750  405191 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 10:59:36.097815  405191 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 10:59:36.097876  405191 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 10:59:36.097954  405191 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 10:59:36.098026  405191 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 10:59:36.098063  405191 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 10:59:36.098122  405191 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:59:36.705762  405191 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:59:36.885173  405191 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:59:37.204953  405191 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:59:37.715956  405191 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:59:37.848965  405191 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:59:37.849735  405191 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:59:37.853600  405191 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:59:37.856590  405191 out.go:252]   - Booting up control plane ...
	I1206 10:59:37.856698  405191 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:59:37.856819  405191 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:59:37.858671  405191 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:59:37.873039  405191 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:59:37.873143  405191 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:59:37.880838  405191 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:59:37.881129  405191 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:59:37.881370  405191 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:59:38.015956  405191 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:59:38.016070  405191 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 11:03:38.011572  405191 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000448393s
	I1206 11:03:38.011605  405191 kubeadm.go:319] 
	I1206 11:03:38.011721  405191 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 11:03:38.011777  405191 kubeadm.go:319] 	- The kubelet is not running
	I1206 11:03:38.012051  405191 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 11:03:38.012060  405191 kubeadm.go:319] 
	I1206 11:03:38.012421  405191 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 11:03:38.012573  405191 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 11:03:38.012628  405191 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 11:03:38.012633  405191 kubeadm.go:319] 
	I1206 11:03:38.018189  405191 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 11:03:38.018608  405191 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 11:03:38.018716  405191 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 11:03:38.018960  405191 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 11:03:38.018965  405191 kubeadm.go:319] 
	I1206 11:03:38.019033  405191 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 11:03:38.019089  405191 kubeadm.go:403] duration metric: took 12m7.551905569s to StartCluster
	I1206 11:03:38.019121  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:03:38.019191  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:03:38.048894  405191 cri.go:89] found id: ""
	I1206 11:03:38.048909  405191 logs.go:282] 0 containers: []
	W1206 11:03:38.048917  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:03:38.048922  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:03:38.049009  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:03:38.077125  405191 cri.go:89] found id: ""
	I1206 11:03:38.077141  405191 logs.go:282] 0 containers: []
	W1206 11:03:38.077149  405191 logs.go:284] No container was found matching "etcd"
	I1206 11:03:38.077154  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:03:38.077229  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:03:38.104859  405191 cri.go:89] found id: ""
	I1206 11:03:38.104873  405191 logs.go:282] 0 containers: []
	W1206 11:03:38.104881  405191 logs.go:284] No container was found matching "coredns"
	I1206 11:03:38.104886  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:03:38.104946  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:03:38.131268  405191 cri.go:89] found id: ""
	I1206 11:03:38.131282  405191 logs.go:282] 0 containers: []
	W1206 11:03:38.131289  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:03:38.131295  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:03:38.131356  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:03:38.161469  405191 cri.go:89] found id: ""
	I1206 11:03:38.161483  405191 logs.go:282] 0 containers: []
	W1206 11:03:38.161490  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:03:38.161495  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:03:38.161555  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:03:38.191440  405191 cri.go:89] found id: ""
	I1206 11:03:38.191454  405191 logs.go:282] 0 containers: []
	W1206 11:03:38.191461  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:03:38.191467  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:03:38.191536  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:03:38.219921  405191 cri.go:89] found id: ""
	I1206 11:03:38.219935  405191 logs.go:282] 0 containers: []
	W1206 11:03:38.219943  405191 logs.go:284] No container was found matching "kindnet"
	I1206 11:03:38.219951  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:03:38.219962  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:03:38.285137  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:03:38.277076   21116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:38.277519   21116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:38.279007   21116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:38.279647   21116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:38.281164   21116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[stderr repeated verbatim from the block above]
	
	** /stderr **
	I1206 11:03:38.285157  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:03:38.285169  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:03:38.355235  405191 logs.go:123] Gathering logs for container status ...
	I1206 11:03:38.355259  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:03:38.391661  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 11:03:38.391679  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:03:38.462714  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 11:03:38.462733  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1206 11:03:38.480853  405191 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000448393s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1206 11:03:38.480894  405191 out.go:285] * 
	W1206 11:03:38.480951  405191 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[identical kubeadm init stdout and stderr as in the "Error starting cluster" block above]
	
	W1206 11:03:38.480964  405191 out.go:285] * 
	W1206 11:03:38.483093  405191 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 11:03:38.488282  405191 out.go:203] 
	W1206 11:03:38.491978  405191 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[identical kubeadm init stdout and stderr as in the "Error starting cluster" block above]
	
	W1206 11:03:38.492089  405191 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 11:03:38.492161  405191 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1206 11:03:38.495164  405191 out.go:203] 
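The suggestion above gives the workaround flag directly; spelled out as a full invocation against the profile this test uses (functional-196950) and the binary under test, it would look like the following sketch:

	# Sketch: restart the profile with the systemd cgroup driver, as the log suggests.
	out/minikube-linux-arm64 start -p functional-196950 --extra-config=kubelet.cgroup-driver=systemd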
	
	
	==> CRI-O <==
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.084499828Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=f0fd0946-b323-435f-946c-e412850eb9c0 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.085495997Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=34b6bb47-44a4-4780-9567-04c497973fa7 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.08608111Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=e26253f3-5094-4fbe-b6d1-306f2e31fa9a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.086661177Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=f8a4ffa3-a2d3-4c05-ba97-fd167ad1ff4e name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.087187984Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=43819321-fbd2-4155-9f0b-c716c27fc9ce name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.087957288Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=aeb2085e-c4e7-4d42-9049-5042f515cbdb name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.088481658Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=0f01a640-f6a6-41dd-afc3-f5cae208f89a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.079725515Z" level=info msg="Checking image status: kicbase/echo-server:functional-196950" id=f87ee22d-4408-46c2-8930-0ab8ba7ffa52 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.079908966Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.079954112Z" level=info msg="Image kicbase/echo-server:functional-196950 not found" id=f87ee22d-4408-46c2-8930-0ab8ba7ffa52 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.080016438Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-196950 found" id=f87ee22d-4408-46c2-8930-0ab8ba7ffa52 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.114067912Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-196950" id=da9dda5e-17fc-43e5-a93c-9057adb4fa98 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.114216122Z" level=info msg="Image docker.io/kicbase/echo-server:functional-196950 not found" id=da9dda5e-17fc-43e5-a93c-9057adb4fa98 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.114253587Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-196950 found" id=da9dda5e-17fc-43e5-a93c-9057adb4fa98 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.143186327Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-196950" id=629f3d91-fd66-4652-88b6-cbe010464984 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.143320859Z" level=info msg="Image localhost/kicbase/echo-server:functional-196950 not found" id=629f3d91-fd66-4652-88b6-cbe010464984 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.143363608Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-196950 found" id=629f3d91-fd66-4652-88b6-cbe010464984 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:52 functional-196950 crio[9931]: time="2025-12-06T11:03:52.005475943Z" level=info msg="Checking image status: kicbase/echo-server:functional-196950" id=c24c3057-b5d3-4f9d-ac3b-5bbc48ff2411 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:52 functional-196950 crio[9931]: time="2025-12-06T11:03:52.005746649Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 06 11:03:52 functional-196950 crio[9931]: time="2025-12-06T11:03:52.005815245Z" level=info msg="Image kicbase/echo-server:functional-196950 not found" id=c24c3057-b5d3-4f9d-ac3b-5bbc48ff2411 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:52 functional-196950 crio[9931]: time="2025-12-06T11:03:52.005906667Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-196950 found" id=c24c3057-b5d3-4f9d-ac3b-5bbc48ff2411 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:52 functional-196950 crio[9931]: time="2025-12-06T11:03:52.062141238Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-196950" id=2c21deca-fd70-4153-95f4-22abbc99a70a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:52 functional-196950 crio[9931]: time="2025-12-06T11:03:52.062283549Z" level=info msg="Image docker.io/kicbase/echo-server:functional-196950 not found" id=2c21deca-fd70-4153-95f4-22abbc99a70a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:52 functional-196950 crio[9931]: time="2025-12-06T11:03:52.062324296Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-196950 found" id=2c21deca-fd70-4153-95f4-22abbc99a70a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:52 functional-196950 crio[9931]: time="2025-12-06T11:03:52.089472813Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-196950" id=a99d8ece-a491-4b6e-b578-7c4c50168ae2 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:05:33.027602   23460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:05:33.028576   23460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:05:33.030191   23460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:05:33.030674   23460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:05:33.032250   23460 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 6 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014752] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503231] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.065820] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.901896] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.944603] kauditd_printk_skb: 39 callbacks suppressed
	[Dec 6 09:04] hrtimer: interrupt took 32230230 ns
	[Dec 6 10:25] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 6 10:26] overlayfs: idmapped layers are currently not supported
	[  +0.066821] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 6 10:32] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:33] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:51] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 11:05:33 up  2:48,  0 user,  load average: 0.44, 0.37, 0.53
	Linux functional-196950 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 11:05:30 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:05:31 functional-196950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1952.
	Dec 06 11:05:31 functional-196950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:05:31 functional-196950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:05:31 functional-196950 kubelet[23320]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:05:31 functional-196950 kubelet[23320]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:05:31 functional-196950 kubelet[23320]: E1206 11:05:31.286456   23320 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:05:31 functional-196950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:05:31 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:05:31 functional-196950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1953.
	Dec 06 11:05:31 functional-196950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:05:31 functional-196950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:05:32 functional-196950 kubelet[23355]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:05:32 functional-196950 kubelet[23355]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:05:32 functional-196950 kubelet[23355]: E1206 11:05:32.033536   23355 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:05:32 functional-196950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:05:32 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:05:32 functional-196950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1954.
	Dec 06 11:05:32 functional-196950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:05:32 functional-196950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:05:32 functional-196950 kubelet[23389]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:05:32 functional-196950 kubelet[23389]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:05:32 functional-196950 kubelet[23389]: E1206 11:05:32.784724   23389 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:05:32 functional-196950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:05:32 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
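Every restart in the loop above (counters 1952 through 1954) dies on the same validation: the kubelet refuses to run on a cgroup v1 host. For anyone triaging a report like this, one quick way to confirm which cgroup version the host is actually running is to check the filesystem type mounted at /sys/fs/cgroup:

	# Prints "cgroup2fs" on a cgroup v2 host; "tmpfs" indicates cgroup v1, consistent with this 5.15 AWS kernel.
	stat -fc %T /sys/fs/cgroup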
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-196950 -n functional-196950
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-196950 -n functional-196950: exit status 2 (417.634365ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-196950" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.23s)
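Note that minikube status exits non-zero to encode component state, which is why the harness flags "exit status 2 (may be ok)": the command succeeded at reporting, and the Stopped apiserver is the real signal. The probe the harness ran can be reproduced directly:

	# Exit code 2 here reflects the stopped apiserver, not a broken status command.
	out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-196950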

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.42s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-196950 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-196950 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (51.084656ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-196950 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-196950 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-196950 describe po hello-node-connect: exit status 1 (57.75725ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1614: "kubectl --context functional-196950 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-196950 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-196950 logs -l app=hello-node-connect: exit status 1 (58.639376ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-196950 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-196950 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-196950 describe svc hello-node-connect: exit status 1 (67.277141ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1626: "kubectl --context functional-196950 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
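All four kubectl calls in this post-mortem fail identically because nothing is listening on 192.168.49.2:8441; the deployment, pod-describe, log, and service lookups are downstream casualties of the kubelet failure documented earlier. A quicker first probe in this situation is to check cluster reachability before inspecting workloads:

	# Fails fast with the same "connection refused" if the apiserver never started.
	kubectl --context functional-196950 cluster-info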
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-196950
helpers_test.go:243: (dbg) docker inspect functional-196950:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1",
	        "Created": "2025-12-06T10:36:45.201779678Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 393848,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T10:36:45.318229053Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/hostname",
	        "HostsPath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/hosts",
	        "LogPath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1-json.log",
	        "Name": "/functional-196950",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-196950:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-196950",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1",
	                "LowerDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1-init/diff:/var/lib/docker/overlay2/5011226d55616c9977b14c1fe617d1302fe59373df05ce8ec6e21b79143a1c57/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-196950",
	                "Source": "/var/lib/docker/volumes/functional-196950/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-196950",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-196950",
	                "name.minikube.sigs.k8s.io": "functional-196950",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9b8f961d55d7529aed7b841f2ac9f818c22ff12b8ad73f2d6bcee22656d9749a",
	            "SandboxKey": "/var/run/docker/netns/9b8f961d55d7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33158"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-196950": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:c1:40:2a:93:47",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a566bfdfd33a868cf61e5b18b36cbd55e9868f24cbb091e055ae606aeb8c6f03",
	                    "EndpointID": "452fe32bde0c42c4c35d700488ae93aeecc6c6a971ac6f1a8a492dbc4b328ed9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-196950",
	                        "d150aac7296d"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
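For reference, any single field in the inspect document above can be pulled with a Go-template query instead of dumping the whole object; a minimal sketch against the Ports map shown above (the same template the harness itself runs later in this log):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-196950
	# prints 33158 here: the host port that 127.0.0.1 maps onto the container's SSH port 22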
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-196950 -n functional-196950
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-196950 -n functional-196950: exit status 2 (347.252695ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-196950 image ls                                                                                                                                │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │ 06 Dec 25 11:03 UTC │
	│ ssh     │ functional-196950 ssh sudo cat /usr/share/ca-certificates/364855.pem                                                                                      │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │ 06 Dec 25 11:03 UTC │
	│ image   │ functional-196950 image save kicbase/echo-server:functional-196950 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │ 06 Dec 25 11:03 UTC │
	│ ssh     │ functional-196950 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │ 06 Dec 25 11:03 UTC │
	│ image   │ functional-196950 image rm kicbase/echo-server:functional-196950 --alsologtostderr                                                                        │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │ 06 Dec 25 11:03 UTC │
	│ ssh     │ functional-196950 ssh sudo cat /etc/ssl/certs/3648552.pem                                                                                                 │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │ 06 Dec 25 11:03 UTC │
	│ ssh     │ functional-196950 ssh sudo cat /usr/share/ca-certificates/3648552.pem                                                                                     │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │ 06 Dec 25 11:03 UTC │
	│ image   │ functional-196950 image ls                                                                                                                                │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │ 06 Dec 25 11:03 UTC │
	│ image   │ functional-196950 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │ 06 Dec 25 11:03 UTC │
	│ ssh     │ functional-196950 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │ 06 Dec 25 11:03 UTC │
	│ ssh     │ functional-196950 ssh sudo cat /etc/test/nested/copy/364855/hosts                                                                                         │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │ 06 Dec 25 11:03 UTC │
	│ image   │ functional-196950 image ls                                                                                                                                │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │ 06 Dec 25 11:03 UTC │
	│ service │ functional-196950 service list                                                                                                                            │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │                     │
	│ image   │ functional-196950 image save --daemon kicbase/echo-server:functional-196950 --alsologtostderr                                                             │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │ 06 Dec 25 11:03 UTC │
	│ service │ functional-196950 service list -o json                                                                                                                    │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │                     │
	│ ssh     │ functional-196950 ssh echo hello                                                                                                                          │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │ 06 Dec 25 11:03 UTC │
	│ service │ functional-196950 service --namespace=default --https --url hello-node                                                                                    │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │                     │
	│ ssh     │ functional-196950 ssh cat /etc/hostname                                                                                                                   │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │ 06 Dec 25 11:03 UTC │
	│ service │ functional-196950 service hello-node --url --format={{.IP}}                                                                                               │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │                     │
	│ tunnel  │ functional-196950 tunnel --alsologtostderr                                                                                                                │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │                     │
	│ tunnel  │ functional-196950 tunnel --alsologtostderr                                                                                                                │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │                     │
	│ service │ functional-196950 service hello-node --url                                                                                                                │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │                     │
	│ tunnel  │ functional-196950 tunnel --alsologtostderr                                                                                                                │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │                     │
	│ addons  │ functional-196950 addons list                                                                                                                             │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │ 06 Dec 25 11:05 UTC │
	│ addons  │ functional-196950 addons list -o json                                                                                                                     │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │ 06 Dec 25 11:05 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:51:25
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:51:25.658528  405191 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:51:25.659862  405191 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:51:25.659873  405191 out.go:374] Setting ErrFile to fd 2...
	I1206 10:51:25.659879  405191 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:51:25.660272  405191 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 10:51:25.660784  405191 out.go:368] Setting JSON to false
	I1206 10:51:25.661671  405191 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9237,"bootTime":1765009049,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 10:51:25.661825  405191 start.go:143] virtualization:  
	I1206 10:51:25.665170  405191 out.go:179] * [functional-196950] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:51:25.668974  405191 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 10:51:25.669057  405191 notify.go:221] Checking for updates...
	I1206 10:51:25.674658  405191 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:51:25.677504  405191 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 10:51:25.680242  405191 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-362985/.minikube
	I1206 10:51:25.683061  405191 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:51:25.685807  405191 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:51:25.689056  405191 config.go:182] Loaded profile config "functional-196950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:51:25.689150  405191 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:51:25.719603  405191 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:51:25.719706  405191 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:51:25.776170  405191 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-06 10:51:25.766414658 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:51:25.776279  405191 docker.go:319] overlay module found
	I1206 10:51:25.779319  405191 out.go:179] * Using the docker driver based on existing profile
	I1206 10:51:25.782157  405191 start.go:309] selected driver: docker
	I1206 10:51:25.782168  405191 start.go:927] validating driver "docker" against &{Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:51:25.782268  405191 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:51:25.782379  405191 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:51:25.843232  405191 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-06 10:51:25.834027648 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:51:25.843742  405191 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 10:51:25.843762  405191 cni.go:84] Creating CNI manager for ""
	I1206 10:51:25.843817  405191 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:51:25.843868  405191 start.go:353] cluster config:
	{Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:51:25.846980  405191 out.go:179] * Starting "functional-196950" primary control-plane node in "functional-196950" cluster
	I1206 10:51:25.849840  405191 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 10:51:25.852721  405191 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:51:25.855512  405191 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 10:51:25.855549  405191 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1206 10:51:25.855557  405191 cache.go:65] Caching tarball of preloaded images
	I1206 10:51:25.855585  405191 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:51:25.855649  405191 preload.go:238] Found /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1206 10:51:25.855670  405191 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1206 10:51:25.855775  405191 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/config.json ...
	I1206 10:51:25.875281  405191 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 10:51:25.875292  405191 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 10:51:25.875312  405191 cache.go:243] Successfully downloaded all kic artifacts
	I1206 10:51:25.875342  405191 start.go:360] acquireMachinesLock for functional-196950: {Name:mkd2471f275d1d2a438cb4ce89f1d1521a0fb340 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:51:25.875462  405191 start.go:364] duration metric: took 100.145µs to acquireMachinesLock for "functional-196950"
	I1206 10:51:25.875483  405191 start.go:96] Skipping create...Using existing machine configuration
	I1206 10:51:25.875487  405191 fix.go:54] fixHost starting: 
	I1206 10:51:25.875763  405191 cli_runner.go:164] Run: docker container inspect functional-196950 --format={{.State.Status}}
	I1206 10:51:25.893454  405191 fix.go:112] recreateIfNeeded on functional-196950: state=Running err=<nil>
	W1206 10:51:25.893482  405191 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 10:51:25.896578  405191 out.go:252] * Updating the running docker "functional-196950" container ...
	I1206 10:51:25.896608  405191 machine.go:94] provisionDockerMachine start ...
	I1206 10:51:25.896697  405191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:51:25.913940  405191 main.go:143] libmachine: Using SSH client type: native
	I1206 10:51:25.914320  405191 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1206 10:51:25.914327  405191 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:51:26.075155  405191 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-196950
	
	I1206 10:51:26.075169  405191 ubuntu.go:182] provisioning hostname "functional-196950"
	I1206 10:51:26.075252  405191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:51:26.094744  405191 main.go:143] libmachine: Using SSH client type: native
	I1206 10:51:26.095070  405191 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1206 10:51:26.095080  405191 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-196950 && echo "functional-196950" | sudo tee /etc/hostname
	I1206 10:51:26.261114  405191 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-196950
	
	I1206 10:51:26.261197  405191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:51:26.279848  405191 main.go:143] libmachine: Using SSH client type: native
	I1206 10:51:26.280166  405191 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1206 10:51:26.280180  405191 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-196950' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-196950/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-196950' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:51:26.431933  405191 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 10:51:26.431953  405191 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-362985/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-362985/.minikube}
	I1206 10:51:26.431971  405191 ubuntu.go:190] setting up certificates
	I1206 10:51:26.431995  405191 provision.go:84] configureAuth start
	I1206 10:51:26.432056  405191 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-196950
	I1206 10:51:26.450343  405191 provision.go:143] copyHostCerts
	I1206 10:51:26.450415  405191 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem, removing ...
	I1206 10:51:26.450432  405191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem
	I1206 10:51:26.450505  405191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem (1082 bytes)
	I1206 10:51:26.450607  405191 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem, removing ...
	I1206 10:51:26.450611  405191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem
	I1206 10:51:26.450636  405191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem (1123 bytes)
	I1206 10:51:26.450689  405191 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem, removing ...
	I1206 10:51:26.450693  405191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem
	I1206 10:51:26.450714  405191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem (1679 bytes)
	I1206 10:51:26.450755  405191 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem org=jenkins.functional-196950 san=[127.0.0.1 192.168.49.2 functional-196950 localhost minikube]
	I1206 10:51:26.540911  405191 provision.go:177] copyRemoteCerts
	I1206 10:51:26.540967  405191 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:51:26.541011  405191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:51:26.559000  405191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:51:26.664415  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 10:51:26.682850  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 10:51:26.700635  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 10:51:26.720260  405191 provision.go:87] duration metric: took 288.251554ms to configureAuth
	I1206 10:51:26.720277  405191 ubuntu.go:206] setting minikube options for container-runtime
	I1206 10:51:26.720482  405191 config.go:182] Loaded profile config "functional-196950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:51:26.720577  405191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:51:26.740294  405191 main.go:143] libmachine: Using SSH client type: native
	I1206 10:51:26.740607  405191 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1206 10:51:26.740618  405191 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 10:51:27.107160  405191 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 10:51:27.107175  405191 machine.go:97] duration metric: took 1.210560762s to provisionDockerMachine
	I1206 10:51:27.107185  405191 start.go:293] postStartSetup for "functional-196950" (driver="docker")
	I1206 10:51:27.107196  405191 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:51:27.107253  405191 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:51:27.107294  405191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:51:27.129039  405191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:51:27.236148  405191 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:51:27.240016  405191 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 10:51:27.240036  405191 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 10:51:27.240047  405191 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-362985/.minikube/addons for local assets ...
	I1206 10:51:27.240125  405191 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-362985/.minikube/files for local assets ...
	I1206 10:51:27.240216  405191 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem -> 3648552.pem in /etc/ssl/certs
	I1206 10:51:27.240311  405191 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/test/nested/copy/364855/hosts -> hosts in /etc/test/nested/copy/364855
	I1206 10:51:27.240389  405191 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/364855
	I1206 10:51:27.248525  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem --> /etc/ssl/certs/3648552.pem (1708 bytes)
	I1206 10:51:27.267246  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/test/nested/copy/364855/hosts --> /etc/test/nested/copy/364855/hosts (40 bytes)
	I1206 10:51:27.285080  405191 start.go:296] duration metric: took 177.880099ms for postStartSetup
	I1206 10:51:27.285152  405191 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:51:27.285189  405191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:51:27.302563  405191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:51:27.404400  405191 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 10:51:27.408968  405191 fix.go:56] duration metric: took 1.533473357s for fixHost
	I1206 10:51:27.408984  405191 start.go:83] releasing machines lock for "functional-196950", held for 1.533513702s
	I1206 10:51:27.409052  405191 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-196950
	I1206 10:51:27.427444  405191 ssh_runner.go:195] Run: cat /version.json
	I1206 10:51:27.427475  405191 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:51:27.427488  405191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:51:27.427532  405191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:51:27.449136  405191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:51:27.450292  405191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:51:27.555364  405191 ssh_runner.go:195] Run: systemctl --version
	I1206 10:51:27.645936  405191 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 10:51:27.683240  405191 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 10:51:27.687562  405191 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:51:27.687626  405191 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:51:27.695460  405191 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 10:51:27.695474  405191 start.go:496] detecting cgroup driver to use...
	I1206 10:51:27.695505  405191 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 10:51:27.695551  405191 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 10:51:27.711018  405191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 10:51:27.724651  405191 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:51:27.724707  405191 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:51:27.740806  405191 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:51:27.754100  405191 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:51:27.883046  405191 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:51:28.013378  405191 docker.go:234] disabling docker service ...
	I1206 10:51:28.013440  405191 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:51:28.030310  405191 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:51:28.044424  405191 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:51:28.162200  405191 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:51:28.315775  405191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 10:51:28.333888  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:51:28.350625  405191 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 10:51:28.350700  405191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:51:28.360184  405191 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 10:51:28.360243  405191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:51:28.369224  405191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:51:28.378656  405191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:51:28.387862  405191 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:51:28.396244  405191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:51:28.405446  405191 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:51:28.414057  405191 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:51:28.423226  405191 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:51:28.430865  405191 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 10:51:28.438644  405191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:51:28.553737  405191 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 10:51:28.722710  405191 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 10:51:28.722782  405191 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 10:51:28.727796  405191 start.go:564] Will wait 60s for crictl version
	I1206 10:51:28.727854  405191 ssh_runner.go:195] Run: which crictl
	I1206 10:51:28.731603  405191 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 10:51:28.757634  405191 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 10:51:28.757708  405191 ssh_runner.go:195] Run: crio --version
	I1206 10:51:28.786864  405191 ssh_runner.go:195] Run: crio --version
	I1206 10:51:28.819624  405191 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1206 10:51:28.822438  405191 cli_runner.go:164] Run: docker network inspect functional-196950 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:51:28.838919  405191 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 10:51:28.845850  405191 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1206 10:51:28.848840  405191 kubeadm.go:884] updating cluster {Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:51:28.848980  405191 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 10:51:28.849059  405191 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:51:28.884770  405191 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:51:28.884782  405191 crio.go:433] Images already preloaded, skipping extraction
	I1206 10:51:28.884839  405191 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:51:28.911560  405191 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:51:28.911574  405191 cache_images.go:86] Images are preloaded, skipping loading
	I1206 10:51:28.911581  405191 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1206 10:51:28.911685  405191 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-196950 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 10:51:28.911771  405191 ssh_runner.go:195] Run: crio config
	I1206 10:51:28.966566  405191 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1206 10:51:28.966595  405191 cni.go:84] Creating CNI manager for ""
	I1206 10:51:28.966604  405191 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:51:28.966619  405191 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 10:51:28.966641  405191 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-196950 NodeName:functional-196950 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfig
Opts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:51:28.966791  405191 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-196950"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 10:51:28.966870  405191 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 10:51:28.978798  405191 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:51:28.978872  405191 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:51:28.987304  405191 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1206 10:51:29.001847  405191 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 10:51:29.017577  405191 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1206 10:51:29.031751  405191 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 10:51:29.036513  405191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:51:29.155805  405191 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:51:29.722153  405191 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950 for IP: 192.168.49.2
	I1206 10:51:29.722163  405191 certs.go:195] generating shared ca certs ...
	I1206 10:51:29.722178  405191 certs.go:227] acquiring lock for ca certs: {Name:mke2ec61a37b6f3abbcbeb9abd23d6a19d011dd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:51:29.722312  405191 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.key
	I1206 10:51:29.722350  405191 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.key
	I1206 10:51:29.722357  405191 certs.go:257] generating profile certs ...
	I1206 10:51:29.722458  405191 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.key
	I1206 10:51:29.722506  405191 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.key.a77b39a6
	I1206 10:51:29.722550  405191 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.key
	I1206 10:51:29.722659  405191 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855.pem (1338 bytes)
	W1206 10:51:29.722686  405191 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855_empty.pem, impossibly tiny 0 bytes
	I1206 10:51:29.722693  405191 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 10:51:29.722721  405191 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem (1082 bytes)
	I1206 10:51:29.722747  405191 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:51:29.722776  405191 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem (1679 bytes)
	I1206 10:51:29.722816  405191 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem (1708 bytes)
	I1206 10:51:29.723422  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:51:29.745118  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 10:51:29.764772  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:51:29.783979  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 10:51:29.803249  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 10:51:29.821820  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 10:51:29.840052  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:51:29.858172  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 10:51:29.876447  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:51:29.894619  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855.pem --> /usr/share/ca-certificates/364855.pem (1338 bytes)
	I1206 10:51:29.912710  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem --> /usr/share/ca-certificates/3648552.pem (1708 bytes)
	I1206 10:51:29.930993  405191 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 10:51:29.944776  405191 ssh_runner.go:195] Run: openssl version
	I1206 10:51:29.951232  405191 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:51:29.958913  405191 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:51:29.966922  405191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:51:29.970672  405191 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:26 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:51:29.970730  405191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:51:30.016305  405191 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 10:51:30.031889  405191 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364855.pem
	I1206 10:51:30.048455  405191 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364855.pem /etc/ssl/certs/364855.pem
	I1206 10:51:30.063564  405191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364855.pem
	I1206 10:51:30.076207  405191 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:36 /usr/share/ca-certificates/364855.pem
	I1206 10:51:30.076271  405191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364855.pem
	I1206 10:51:30.128156  405191 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 10:51:30.136853  405191 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648552.pem
	I1206 10:51:30.146061  405191 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648552.pem /etc/ssl/certs/3648552.pem
	I1206 10:51:30.154785  405191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648552.pem
	I1206 10:51:30.159209  405191 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:36 /usr/share/ca-certificates/3648552.pem
	I1206 10:51:30.159296  405191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648552.pem
	I1206 10:51:30.202450  405191 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
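The ln/test pairs above follow OpenSSL's trust-store convention: a CA is looked up in /etc/ssl/certs via a symlink named <subject-hash>.0, where the hash is whatever `openssl x509 -hash -noout` prints (b5213941 for minikubeCA above). A minimal Go sketch of the same computation, assuming openssl is on PATH (an illustration, not minikube's source):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHash shells out to openssl to compute the subject hash that
// names the /etc/ssl/certs/<hash>.0 symlink.
func subjectHash(pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	hash, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		panic(err)
	}
	// minikube then runs: sudo ln -fs <pem> /etc/ssl/certs/<hash>.0
	fmt.Printf("link target: /etc/ssl/certs/%s.0\n", hash)
}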
	I1206 10:51:30.210421  405191 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:51:30.214689  405191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 10:51:30.257294  405191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 10:51:30.301161  405191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 10:51:30.342552  405191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 10:51:30.384443  405191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 10:51:30.426153  405191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
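Each `openssl x509 -checkend 86400` above asks whether the certificate will still be valid 86400 seconds (24h) from now; a nonzero exit would force regeneration. The equivalent check using only Go's standard library, sketched under the same assumption about the file path:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/front-proxy-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Mirrors -checkend 86400: fail if the cert expires within 24h.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least another 24h")
}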
	I1206 10:51:30.467193  405191 kubeadm.go:401] StartCluster: {Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:51:30.467269  405191 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:51:30.467336  405191 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:51:30.505294  405191 cri.go:89] found id: ""
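`crictl ps -a --quiet` prints one container ID per line, here filtered to kube-system via the pod-namespace label; an empty result is what the `found id: ""` line records. A hedged Go sketch of issuing and parsing that listing (illustrative, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same listing as above: all kube-system containers, IDs only.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out))
	fmt.Printf("%d containers: %v\n", len(ids), ids)
}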
	I1206 10:51:30.505356  405191 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:51:30.514317  405191 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 10:51:30.514327  405191 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 10:51:30.514378  405191 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 10:51:30.522953  405191 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:51:30.523619  405191 kubeconfig.go:125] found "functional-196950" server: "https://192.168.49.2:8441"
	I1206 10:51:30.525284  405191 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 10:51:30.535655  405191 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-06 10:36:53.608460602 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-06 10:51:29.025529796 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
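The drift check above relies on diff's exit status: `diff -u` exits 0 when the deployed kubeadm.yaml matches the freshly generated one and 1 when they differ, and a difference is what triggers the reconfigure path. A sketch of that decision (an illustration, not minikube's exact code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	err := cmd.Run()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("no drift; keep the running configuration")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 1:
		fmt.Println("drift detected; reconfigure from kubeadm.yaml.new")
	default:
		panic(err) // diff itself failed (exit 2) or could not start
	}
}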
	I1206 10:51:30.535667  405191 kubeadm.go:1161] stopping kube-system containers ...
	I1206 10:51:30.535679  405191 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1206 10:51:30.535750  405191 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:51:30.563289  405191 cri.go:89] found id: ""
	I1206 10:51:30.563367  405191 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1206 10:51:30.577669  405191 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:51:30.585599  405191 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec  6 10:40 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  6 10:40 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  6 10:40 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec  6 10:40 /etc/kubernetes/scheduler.conf
	
	I1206 10:51:30.585661  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:51:30.593607  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:51:30.601561  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:51:30.601615  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:51:30.609082  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:51:30.616706  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:51:30.616764  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:51:30.624576  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:51:30.632333  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:51:30.632396  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
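The grep/rm pairs above validate each kubeconfig: if a file under /etc/kubernetes does not mention the expected control-plane endpoint (grep exits 1), the file is removed so the later `kubeadm init phase kubeconfig all` regenerates it. Sketched in Go under the same assumption about grep's exit semantics:

package main

import (
	"fmt"
	"os/exec"
)

// ensureEndpoint removes a kubeconfig that does not reference the
// expected control-plane endpoint, so kubeadm can regenerate it.
func ensureEndpoint(path, endpoint string) error {
	if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
		fmt.Printf("%q may not be in %s - removing\n", endpoint, path)
		return exec.Command("sudo", "rm", "-f", path).Run()
	}
	return nil
}

func main() {
	for _, f := range []string{"kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		if err := ensureEndpoint("/etc/kubernetes/"+f, "https://control-plane.minikube.internal:8441"); err != nil {
			panic(err)
		}
	}
}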
	I1206 10:51:30.640022  405191 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 10:51:30.648015  405191 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:51:30.694279  405191 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:51:31.789747  405191 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.095443049s)
	I1206 10:51:31.789807  405191 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:51:31.992373  405191 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:51:32.066243  405191 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:51:32.115098  405191 api_server.go:52] waiting for apiserver process to appear ...
	I1206 10:51:32.115193  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:51:32.616025  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the same `sudo pgrep -xnf kube-apiserver.*minikube.*` probe repeats every ~500ms from 10:51:33 through 10:52:31 while waiting for the apiserver process to appear ...]
	I1206 10:52:31.616201  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
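The probes above are a fixed-interval wait: rerun pgrep every ~500ms until kube-apiserver shows up or the wait times out, at which point minikube falls back to gathering diagnostics (the crictl/journalctl runs that follow). A minimal sketch of such a loop, with the timeout value assumed for illustration:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until kube-apiserver is running or the
// deadline passes; pgrep exits nonzero while no process matches.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(time.Minute); err != nil {
		fmt.Println(err)
	}
}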
	I1206 10:52:32.116069  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:52:32.116145  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:52:32.141390  405191 cri.go:89] found id: ""
	I1206 10:52:32.141404  405191 logs.go:282] 0 containers: []
	W1206 10:52:32.141411  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:52:32.141416  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:52:32.141473  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:52:32.166484  405191 cri.go:89] found id: ""
	I1206 10:52:32.166497  405191 logs.go:282] 0 containers: []
	W1206 10:52:32.166504  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:52:32.166509  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:52:32.166565  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:52:32.194996  405191 cri.go:89] found id: ""
	I1206 10:52:32.195009  405191 logs.go:282] 0 containers: []
	W1206 10:52:32.195016  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:52:32.195021  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:52:32.195076  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:52:32.221300  405191 cri.go:89] found id: ""
	I1206 10:52:32.221313  405191 logs.go:282] 0 containers: []
	W1206 10:52:32.221321  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:52:32.221326  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:52:32.221382  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:52:32.247157  405191 cri.go:89] found id: ""
	I1206 10:52:32.247171  405191 logs.go:282] 0 containers: []
	W1206 10:52:32.247178  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:52:32.247201  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:52:32.247261  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:52:32.272996  405191 cri.go:89] found id: ""
	I1206 10:52:32.273011  405191 logs.go:282] 0 containers: []
	W1206 10:52:32.273018  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:52:32.273023  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:52:32.273087  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:52:32.298872  405191 cri.go:89] found id: ""
	I1206 10:52:32.298885  405191 logs.go:282] 0 containers: []
	W1206 10:52:32.298892  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:52:32.298899  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:52:32.298909  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:52:32.365036  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:52:32.365056  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:52:32.380152  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:52:32.380168  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:52:32.448480  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:52:32.439513   11005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:32.440191   11005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:32.441917   11005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:32.442441   11005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:32.444184   11005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:52:32.439513   11005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:32.440191   11005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:32.441917   11005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:32.442441   11005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:32.444184   11005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:52:32.448508  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:52:32.448519  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:52:32.521363  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:52:32.521385  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:52:35.051557  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:52:35.061829  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:52:35.061887  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:52:35.090093  405191 cri.go:89] found id: ""
	I1206 10:52:35.090109  405191 logs.go:282] 0 containers: []
	W1206 10:52:35.090116  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:52:35.090123  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:52:35.090185  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:52:35.120692  405191 cri.go:89] found id: ""
	I1206 10:52:35.120706  405191 logs.go:282] 0 containers: []
	W1206 10:52:35.120713  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:52:35.120718  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:52:35.120781  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:52:35.150871  405191 cri.go:89] found id: ""
	I1206 10:52:35.150885  405191 logs.go:282] 0 containers: []
	W1206 10:52:35.150895  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:52:35.150901  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:52:35.150966  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:52:35.178176  405191 cri.go:89] found id: ""
	I1206 10:52:35.178189  405191 logs.go:282] 0 containers: []
	W1206 10:52:35.178196  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:52:35.178201  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:52:35.178259  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:52:35.203836  405191 cri.go:89] found id: ""
	I1206 10:52:35.203851  405191 logs.go:282] 0 containers: []
	W1206 10:52:35.203858  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:52:35.203864  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:52:35.203922  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:52:35.229838  405191 cri.go:89] found id: ""
	I1206 10:52:35.229852  405191 logs.go:282] 0 containers: []
	W1206 10:52:35.229860  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:52:35.229865  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:52:35.229923  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:52:35.255728  405191 cri.go:89] found id: ""
	I1206 10:52:35.255742  405191 logs.go:282] 0 containers: []
	W1206 10:52:35.255749  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:52:35.255763  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:52:35.255774  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:52:35.326293  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:52:35.326313  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:52:35.341587  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:52:35.341603  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:52:35.406128  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:52:35.396962   11109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:35.397407   11109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:35.399334   11109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:35.399842   11109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:35.401729   11109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:52:35.396962   11109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:35.397407   11109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:35.399334   11109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:35.399842   11109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:35.401729   11109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:52:35.406138  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:52:35.406148  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:52:35.477539  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:52:35.477561  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:52:38.012461  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:52:38.026662  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:52:38.026746  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:52:38.057501  405191 cri.go:89] found id: ""
	I1206 10:52:38.057514  405191 logs.go:282] 0 containers: []
	W1206 10:52:38.057522  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:52:38.057527  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:52:38.057597  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:52:38.087721  405191 cri.go:89] found id: ""
	I1206 10:52:38.087736  405191 logs.go:282] 0 containers: []
	W1206 10:52:38.087744  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:52:38.087750  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:52:38.087812  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:52:38.115539  405191 cri.go:89] found id: ""
	I1206 10:52:38.115553  405191 logs.go:282] 0 containers: []
	W1206 10:52:38.115560  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:52:38.115566  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:52:38.115624  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:52:38.140812  405191 cri.go:89] found id: ""
	I1206 10:52:38.140826  405191 logs.go:282] 0 containers: []
	W1206 10:52:38.140833  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:52:38.140838  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:52:38.140896  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:52:38.166576  405191 cri.go:89] found id: ""
	I1206 10:52:38.166590  405191 logs.go:282] 0 containers: []
	W1206 10:52:38.166597  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:52:38.166602  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:52:38.166662  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:52:38.191851  405191 cri.go:89] found id: ""
	I1206 10:52:38.191864  405191 logs.go:282] 0 containers: []
	W1206 10:52:38.191871  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:52:38.191876  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:52:38.191933  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:52:38.217461  405191 cri.go:89] found id: ""
	I1206 10:52:38.217475  405191 logs.go:282] 0 containers: []
	W1206 10:52:38.217482  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:52:38.217490  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:52:38.217502  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:52:38.232449  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:52:38.232465  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:52:38.295220  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:52:38.286931   11213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:38.287615   11213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:38.289268   11213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:38.289707   11213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:38.291283   11213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:52:38.286931   11213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:38.287615   11213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:38.289268   11213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:38.289707   11213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:38.291283   11213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:52:38.295242  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:52:38.295255  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:52:38.363789  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:52:38.363809  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:52:38.393298  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:52:38.393313  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:52:40.963508  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:52:40.975400  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:52:40.975471  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:52:41.012381  405191 cri.go:89] found id: ""
	I1206 10:52:41.012396  405191 logs.go:282] 0 containers: []
	W1206 10:52:41.012403  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:52:41.012409  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:52:41.012481  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:52:41.045820  405191 cri.go:89] found id: ""
	I1206 10:52:41.045833  405191 logs.go:282] 0 containers: []
	W1206 10:52:41.045840  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:52:41.045845  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:52:41.045905  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:52:41.072220  405191 cri.go:89] found id: ""
	I1206 10:52:41.072234  405191 logs.go:282] 0 containers: []
	W1206 10:52:41.072241  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:52:41.072246  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:52:41.072315  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:52:41.099263  405191 cri.go:89] found id: ""
	I1206 10:52:41.099289  405191 logs.go:282] 0 containers: []
	W1206 10:52:41.099297  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:52:41.099302  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:52:41.099400  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:52:41.125321  405191 cri.go:89] found id: ""
	I1206 10:52:41.125335  405191 logs.go:282] 0 containers: []
	W1206 10:52:41.125342  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:52:41.125347  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:52:41.125407  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:52:41.151976  405191 cri.go:89] found id: ""
	I1206 10:52:41.151991  405191 logs.go:282] 0 containers: []
	W1206 10:52:41.151998  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:52:41.152004  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:52:41.152071  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:52:41.182220  405191 cri.go:89] found id: ""
	I1206 10:52:41.182246  405191 logs.go:282] 0 containers: []
	W1206 10:52:41.182254  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:52:41.182262  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:52:41.182276  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:52:41.248526  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:52:41.239066   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:41.239904   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:41.241505   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:41.242002   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:41.243768   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:52:41.239066   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:41.239904   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:41.241505   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:41.242002   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:41.243768   11315 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:52:41.248580  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:52:41.248592  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:52:41.318224  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:52:41.318245  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:52:41.351350  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:52:41.351366  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:52:41.419147  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:52:41.419175  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:52:43.934479  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:52:43.945219  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:52:43.945319  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:52:43.977434  405191 cri.go:89] found id: ""
	I1206 10:52:43.977447  405191 logs.go:282] 0 containers: []
	W1206 10:52:43.977455  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:52:43.977460  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:52:43.977521  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:52:44.023455  405191 cri.go:89] found id: ""
	I1206 10:52:44.023469  405191 logs.go:282] 0 containers: []
	W1206 10:52:44.023476  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:52:44.023481  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:52:44.023547  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:52:44.054515  405191 cri.go:89] found id: ""
	I1206 10:52:44.054528  405191 logs.go:282] 0 containers: []
	W1206 10:52:44.054535  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:52:44.054542  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:52:44.054606  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:52:44.081078  405191 cri.go:89] found id: ""
	I1206 10:52:44.081092  405191 logs.go:282] 0 containers: []
	W1206 10:52:44.081100  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:52:44.081105  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:52:44.081169  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:52:44.107423  405191 cri.go:89] found id: ""
	I1206 10:52:44.107437  405191 logs.go:282] 0 containers: []
	W1206 10:52:44.107451  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:52:44.107456  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:52:44.107514  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:52:44.134813  405191 cri.go:89] found id: ""
	I1206 10:52:44.134827  405191 logs.go:282] 0 containers: []
	W1206 10:52:44.134834  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:52:44.134839  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:52:44.134901  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:52:44.160796  405191 cri.go:89] found id: ""
	I1206 10:52:44.160816  405191 logs.go:282] 0 containers: []
	W1206 10:52:44.160824  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:52:44.160831  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:52:44.160842  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:52:44.190778  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:52:44.190796  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:52:44.257562  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:52:44.257581  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:52:44.272647  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:52:44.272663  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:52:44.338023  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:52:44.329392   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:44.330156   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:44.331823   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:44.332332   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:44.333956   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:52:44.338033  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:52:44.338043  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:52:46.906964  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:52:46.917503  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:52:46.917559  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:52:46.949168  405191 cri.go:89] found id: ""
	I1206 10:52:46.949182  405191 logs.go:282] 0 containers: []
	W1206 10:52:46.949189  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:52:46.949194  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:52:46.949253  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:52:46.981111  405191 cri.go:89] found id: ""
	I1206 10:52:46.981124  405191 logs.go:282] 0 containers: []
	W1206 10:52:46.981131  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:52:46.981136  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:52:46.981196  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:52:47.022951  405191 cri.go:89] found id: ""
	I1206 10:52:47.022965  405191 logs.go:282] 0 containers: []
	W1206 10:52:47.022972  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:52:47.022977  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:52:47.023037  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:52:47.052856  405191 cri.go:89] found id: ""
	I1206 10:52:47.052870  405191 logs.go:282] 0 containers: []
	W1206 10:52:47.052886  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:52:47.052891  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:52:47.052967  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:52:47.083787  405191 cri.go:89] found id: ""
	I1206 10:52:47.083800  405191 logs.go:282] 0 containers: []
	W1206 10:52:47.083807  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:52:47.083813  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:52:47.083870  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:52:47.109033  405191 cri.go:89] found id: ""
	I1206 10:52:47.109046  405191 logs.go:282] 0 containers: []
	W1206 10:52:47.109054  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:52:47.109059  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:52:47.109115  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:52:47.139758  405191 cri.go:89] found id: ""
	I1206 10:52:47.139772  405191 logs.go:282] 0 containers: []
	W1206 10:52:47.139779  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:52:47.139788  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:52:47.139798  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:52:47.154866  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:52:47.154884  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:52:47.221813  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:52:47.213688   11530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:47.214230   11530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:47.215830   11530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:47.216327   11530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:47.217906   11530 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:52:47.221824  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:52:47.221835  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:52:47.290233  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:52:47.290253  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:52:47.321014  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:52:47.321036  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:52:49.890726  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:52:49.902627  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:52:49.902688  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:52:49.929201  405191 cri.go:89] found id: ""
	I1206 10:52:49.929215  405191 logs.go:282] 0 containers: []
	W1206 10:52:49.929224  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:52:49.929230  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:52:49.929290  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:52:49.956185  405191 cri.go:89] found id: ""
	I1206 10:52:49.956198  405191 logs.go:282] 0 containers: []
	W1206 10:52:49.956205  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:52:49.956210  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:52:49.956269  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:52:49.993314  405191 cri.go:89] found id: ""
	I1206 10:52:49.993329  405191 logs.go:282] 0 containers: []
	W1206 10:52:49.993336  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:52:49.993343  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:52:49.993403  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:52:50.037379  405191 cri.go:89] found id: ""
	I1206 10:52:50.037395  405191 logs.go:282] 0 containers: []
	W1206 10:52:50.037403  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:52:50.037409  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:52:50.037472  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:52:50.067336  405191 cri.go:89] found id: ""
	I1206 10:52:50.067351  405191 logs.go:282] 0 containers: []
	W1206 10:52:50.067358  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:52:50.067363  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:52:50.067469  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:52:50.094997  405191 cri.go:89] found id: ""
	I1206 10:52:50.095010  405191 logs.go:282] 0 containers: []
	W1206 10:52:50.095018  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:52:50.095023  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:52:50.095087  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:52:50.122233  405191 cri.go:89] found id: ""
	I1206 10:52:50.122247  405191 logs.go:282] 0 containers: []
	W1206 10:52:50.122254  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:52:50.122262  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:52:50.122274  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:52:50.137790  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:52:50.137811  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:52:50.201020  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:52:50.192768   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:50.193599   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:50.195170   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:50.195719   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:50.197320   11636 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:52:50.201031  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:52:50.201041  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:52:50.275122  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:52:50.275142  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:52:50.303756  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:52:50.303777  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
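The timestamps show the health wait retrying on a roughly three-second cadence: each pass first looks for a kube-apiserver process with pgrep, then asks the runtime for each control-plane container by name and gets an empty ID list every time. A minimal shell sketch of that wait loop, using the exact pgrep and crictl commands from the log (the loop and sleep are assumptions about the retry logic, not minikube source):

    # Poll until an apiserver process for this profile appears (sketch):
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
        # Empty output here means CRI-O never created the container at all.
        sudo crictl ps -a --quiet --name=kube-apiserver
        sleep 3
    done
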
	I1206 10:52:52.872285  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:52:52.882349  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:52:52.882406  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:52:52.911618  405191 cri.go:89] found id: ""
	I1206 10:52:52.911631  405191 logs.go:282] 0 containers: []
	W1206 10:52:52.911638  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:52:52.911644  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:52:52.911705  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:52:52.937062  405191 cri.go:89] found id: ""
	I1206 10:52:52.937077  405191 logs.go:282] 0 containers: []
	W1206 10:52:52.937084  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:52:52.937089  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:52:52.937149  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:52:52.963326  405191 cri.go:89] found id: ""
	I1206 10:52:52.963340  405191 logs.go:282] 0 containers: []
	W1206 10:52:52.963347  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:52:52.963352  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:52:52.963437  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:52:52.997061  405191 cri.go:89] found id: ""
	I1206 10:52:52.997074  405191 logs.go:282] 0 containers: []
	W1206 10:52:52.997081  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:52:52.997086  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:52:52.997149  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:52:53.035456  405191 cri.go:89] found id: ""
	I1206 10:52:53.035469  405191 logs.go:282] 0 containers: []
	W1206 10:52:53.035477  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:52:53.035483  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:52:53.035543  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:52:53.063687  405191 cri.go:89] found id: ""
	I1206 10:52:53.063700  405191 logs.go:282] 0 containers: []
	W1206 10:52:53.063707  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:52:53.063712  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:52:53.063770  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:52:53.089131  405191 cri.go:89] found id: ""
	I1206 10:52:53.089145  405191 logs.go:282] 0 containers: []
	W1206 10:52:53.089152  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:52:53.089161  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:52:53.089180  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:52:53.154130  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:52:53.145768   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:53.146202   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:53.147939   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:53.148440   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:53.150128   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:52:53.154142  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:52:53.154153  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:52:53.226211  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:52:53.226231  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:52:53.255876  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:52:53.255893  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:52:53.328864  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:52:53.328884  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:52:55.844855  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:52:55.855173  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:52:55.855232  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:52:55.882003  405191 cri.go:89] found id: ""
	I1206 10:52:55.882016  405191 logs.go:282] 0 containers: []
	W1206 10:52:55.882037  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:52:55.882043  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:52:55.882102  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:52:55.906679  405191 cri.go:89] found id: ""
	I1206 10:52:55.906693  405191 logs.go:282] 0 containers: []
	W1206 10:52:55.906700  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:52:55.906705  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:52:55.906763  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:52:55.932742  405191 cri.go:89] found id: ""
	I1206 10:52:55.932756  405191 logs.go:282] 0 containers: []
	W1206 10:52:55.932763  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:52:55.932769  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:52:55.932830  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:52:55.959084  405191 cri.go:89] found id: ""
	I1206 10:52:55.959097  405191 logs.go:282] 0 containers: []
	W1206 10:52:55.959104  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:52:55.959109  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:52:55.959167  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:52:56.001438  405191 cri.go:89] found id: ""
	I1206 10:52:56.001453  405191 logs.go:282] 0 containers: []
	W1206 10:52:56.001461  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:52:56.001467  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:52:56.001540  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:52:56.039276  405191 cri.go:89] found id: ""
	I1206 10:52:56.039291  405191 logs.go:282] 0 containers: []
	W1206 10:52:56.039298  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:52:56.039304  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:52:56.039368  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:52:56.074083  405191 cri.go:89] found id: ""
	I1206 10:52:56.074097  405191 logs.go:282] 0 containers: []
	W1206 10:52:56.074104  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:52:56.074112  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:52:56.074124  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:52:56.148294  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:52:56.148320  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:52:56.163720  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:52:56.163740  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:52:56.231608  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:52:56.222271   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:56.222910   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:56.224621   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:56.225337   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:56.227055   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:52:56.231633  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:52:56.231644  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:52:56.301348  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:52:56.301373  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:52:58.834132  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:52:58.844214  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:52:58.844271  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:52:58.871604  405191 cri.go:89] found id: ""
	I1206 10:52:58.871618  405191 logs.go:282] 0 containers: []
	W1206 10:52:58.871625  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:52:58.871630  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:52:58.871689  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:52:58.898243  405191 cri.go:89] found id: ""
	I1206 10:52:58.898257  405191 logs.go:282] 0 containers: []
	W1206 10:52:58.898264  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:52:58.898269  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:52:58.898325  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:52:58.921887  405191 cri.go:89] found id: ""
	I1206 10:52:58.921901  405191 logs.go:282] 0 containers: []
	W1206 10:52:58.921907  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:52:58.921913  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:52:58.921970  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:52:58.947546  405191 cri.go:89] found id: ""
	I1206 10:52:58.947563  405191 logs.go:282] 0 containers: []
	W1206 10:52:58.947570  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:52:58.947575  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:52:58.947645  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:52:58.976915  405191 cri.go:89] found id: ""
	I1206 10:52:58.976930  405191 logs.go:282] 0 containers: []
	W1206 10:52:58.976937  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:52:58.976942  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:52:58.977005  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:52:59.013936  405191 cri.go:89] found id: ""
	I1206 10:52:59.013949  405191 logs.go:282] 0 containers: []
	W1206 10:52:59.013956  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:52:59.013962  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:52:59.014020  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:52:59.044670  405191 cri.go:89] found id: ""
	I1206 10:52:59.044683  405191 logs.go:282] 0 containers: []
	W1206 10:52:59.044690  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:52:59.044698  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:52:59.044708  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:52:59.111552  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:52:59.111571  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:52:59.125917  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:52:59.125933  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:52:59.190341  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:52:59.182165   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:59.182776   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:59.184355   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:59.184805   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:59.186371   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:52:59.190351  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:52:59.190362  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:52:59.258936  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:52:59.258957  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:01.790777  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:01.802470  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:01.802534  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:01.828331  405191 cri.go:89] found id: ""
	I1206 10:53:01.828345  405191 logs.go:282] 0 containers: []
	W1206 10:53:01.828352  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:01.828357  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:01.828415  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:01.853132  405191 cri.go:89] found id: ""
	I1206 10:53:01.853145  405191 logs.go:282] 0 containers: []
	W1206 10:53:01.853153  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:01.853158  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:01.853218  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:01.879034  405191 cri.go:89] found id: ""
	I1206 10:53:01.879048  405191 logs.go:282] 0 containers: []
	W1206 10:53:01.879055  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:01.879060  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:01.879119  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:01.905079  405191 cri.go:89] found id: ""
	I1206 10:53:01.905094  405191 logs.go:282] 0 containers: []
	W1206 10:53:01.905101  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:01.905106  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:01.905168  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:01.931029  405191 cri.go:89] found id: ""
	I1206 10:53:01.931043  405191 logs.go:282] 0 containers: []
	W1206 10:53:01.931050  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:01.931055  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:01.931115  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:01.958324  405191 cri.go:89] found id: ""
	I1206 10:53:01.958338  405191 logs.go:282] 0 containers: []
	W1206 10:53:01.958345  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:01.958351  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:01.958406  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:01.999570  405191 cri.go:89] found id: ""
	I1206 10:53:01.999583  405191 logs.go:282] 0 containers: []
	W1206 10:53:01.999590  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:01.999598  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:01.999613  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:02.075754  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:02.075775  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:02.091145  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:02.091168  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:02.166018  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:02.151211   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:02.151882   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:02.153563   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:02.154149   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:02.161209   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:53:02.166029  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:02.166041  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:02.236832  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:02.236853  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:04.769770  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:04.780155  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:04.780230  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:04.805785  405191 cri.go:89] found id: ""
	I1206 10:53:04.805799  405191 logs.go:282] 0 containers: []
	W1206 10:53:04.805806  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:04.805811  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:04.805871  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:04.833423  405191 cri.go:89] found id: ""
	I1206 10:53:04.833445  405191 logs.go:282] 0 containers: []
	W1206 10:53:04.833452  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:04.833458  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:04.833523  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:04.859864  405191 cri.go:89] found id: ""
	I1206 10:53:04.859879  405191 logs.go:282] 0 containers: []
	W1206 10:53:04.859888  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:04.859895  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:04.859964  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:04.886417  405191 cri.go:89] found id: ""
	I1206 10:53:04.886431  405191 logs.go:282] 0 containers: []
	W1206 10:53:04.886437  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:04.886443  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:04.886503  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:04.912019  405191 cri.go:89] found id: ""
	I1206 10:53:04.912033  405191 logs.go:282] 0 containers: []
	W1206 10:53:04.912040  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:04.912044  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:04.912104  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:04.941901  405191 cri.go:89] found id: ""
	I1206 10:53:04.941915  405191 logs.go:282] 0 containers: []
	W1206 10:53:04.941922  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:04.941928  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:04.941990  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:04.967316  405191 cri.go:89] found id: ""
	I1206 10:53:04.967330  405191 logs.go:282] 0 containers: []
	W1206 10:53:04.967337  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:04.967344  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:04.967356  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:05.048268  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:05.048290  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:05.064282  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:05.064299  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:05.132111  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:05.123756   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:05.124563   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:05.126211   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:05.126545   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:05.128101   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:53:05.132131  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:05.132142  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:05.202438  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:05.202460  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
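With crictl returning no containers at all, the decisive evidence sits in the two unit journals each gather pass already collects: the kubelet journal shows whether the control-plane static pods were ever synced, and the CRI-O journal shows whether any container creation was attempted and why it failed. The journalctl commands below are the ones from the log; the grep filters are illustrative narrowing, not part of the test:

    # Kubelet side: was the kube-apiserver static pod ever acted on?
    sudo journalctl -u kubelet -n 400 | grep -i apiserver

    # CRI-O side: any create/start attempts or runtime errors?
    sudo journalctl -u crio -n 400 | grep -iE 'error|fail'
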
	I1206 10:53:07.731737  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:07.742255  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:07.742344  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:07.767645  405191 cri.go:89] found id: ""
	I1206 10:53:07.767659  405191 logs.go:282] 0 containers: []
	W1206 10:53:07.767666  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:07.767671  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:07.767730  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:07.793951  405191 cri.go:89] found id: ""
	I1206 10:53:07.793975  405191 logs.go:282] 0 containers: []
	W1206 10:53:07.793983  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:07.793989  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:07.794055  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:07.819683  405191 cri.go:89] found id: ""
	I1206 10:53:07.819699  405191 logs.go:282] 0 containers: []
	W1206 10:53:07.819705  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:07.819711  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:07.819784  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:07.851523  405191 cri.go:89] found id: ""
	I1206 10:53:07.851537  405191 logs.go:282] 0 containers: []
	W1206 10:53:07.851543  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:07.851549  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:07.851627  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:07.878807  405191 cri.go:89] found id: ""
	I1206 10:53:07.878831  405191 logs.go:282] 0 containers: []
	W1206 10:53:07.878838  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:07.878844  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:07.878915  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:07.911047  405191 cri.go:89] found id: ""
	I1206 10:53:07.911060  405191 logs.go:282] 0 containers: []
	W1206 10:53:07.911078  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:07.911084  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:07.911155  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:07.937042  405191 cri.go:89] found id: ""
	I1206 10:53:07.937064  405191 logs.go:282] 0 containers: []
	W1206 10:53:07.937072  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:07.937080  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:07.937091  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:08.004528  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:08.004551  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:08.026930  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:08.026947  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:08.109064  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:08.100555   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:08.101020   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:08.102569   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:08.102918   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:08.104386   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:08.100555   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:08.101020   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:08.102569   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:08.102918   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:08.104386   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:53:08.109086  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:08.109096  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:08.177486  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:08.177508  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
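The block above shows minikube's per-component probe: for each expected control-plane container it asks the CRI runtime for matching IDs and logs found id: "" when the runtime returns nothing. The same probe can be reproduced by hand with the exact flags from the log (a sketch; run on the node, e.g. via minikube ssh):

    # Ask CRI-O for each control-plane container by name, as the log does.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "no container matching \"$name\""
    done

All seven names coming back empty, as they do throughout this log, means the control plane was never started (or was torn down), not merely unhealthy.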
	I1206 10:53:10.706543  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:10.717198  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:10.717262  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:10.743532  405191 cri.go:89] found id: ""
	I1206 10:53:10.743545  405191 logs.go:282] 0 containers: []
	W1206 10:53:10.743552  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:10.743557  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:10.743617  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:10.768882  405191 cri.go:89] found id: ""
	I1206 10:53:10.768897  405191 logs.go:282] 0 containers: []
	W1206 10:53:10.768903  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:10.768908  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:10.768966  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:10.798729  405191 cri.go:89] found id: ""
	I1206 10:53:10.798742  405191 logs.go:282] 0 containers: []
	W1206 10:53:10.798751  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:10.798756  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:10.798814  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:10.823956  405191 cri.go:89] found id: ""
	I1206 10:53:10.823971  405191 logs.go:282] 0 containers: []
	W1206 10:53:10.823978  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:10.823984  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:10.824054  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:10.849242  405191 cri.go:89] found id: ""
	I1206 10:53:10.849271  405191 logs.go:282] 0 containers: []
	W1206 10:53:10.849278  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:10.849283  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:10.849351  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:10.876058  405191 cri.go:89] found id: ""
	I1206 10:53:10.876071  405191 logs.go:282] 0 containers: []
	W1206 10:53:10.876078  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:10.876086  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:10.876145  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:10.901170  405191 cri.go:89] found id: ""
	I1206 10:53:10.901184  405191 logs.go:282] 0 containers: []
	W1206 10:53:10.901192  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:10.901199  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:10.901210  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:10.971362  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:10.971388  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:11.005981  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:11.006000  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:11.089894  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:11.089916  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:11.106328  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:11.106365  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:11.174633  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:11.166045   12392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:11.167001   12392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:11.168645   12392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:11.169014   12392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:11.170539   12392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:11.166045   12392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:11.167001   12392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:11.168645   12392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:11.169014   12392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:11.170539   12392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
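When no containers are found, each cycle falls back to gathering host-level diagnostics. The individual commands are all visible in the log and can be bundled into one manual triage pass (commands copied from the log, including the kubectl path for this v1.35.0-beta.0 run):

    # Collect the same diagnostics minikube gathers each cycle.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig

Only the describe-nodes step fails here, and it fails for the same reason as every other kubectl call in this section: nothing is serving on port 8441.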
	I1206 10:53:13.674898  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:13.689619  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:13.689793  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:13.724853  405191 cri.go:89] found id: ""
	I1206 10:53:13.724867  405191 logs.go:282] 0 containers: []
	W1206 10:53:13.724874  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:13.724880  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:13.724939  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:13.751349  405191 cri.go:89] found id: ""
	I1206 10:53:13.751363  405191 logs.go:282] 0 containers: []
	W1206 10:53:13.751369  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:13.751402  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:13.751488  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:13.778380  405191 cri.go:89] found id: ""
	I1206 10:53:13.778395  405191 logs.go:282] 0 containers: []
	W1206 10:53:13.778402  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:13.778408  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:13.778474  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:13.806068  405191 cri.go:89] found id: ""
	I1206 10:53:13.806081  405191 logs.go:282] 0 containers: []
	W1206 10:53:13.806088  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:13.806093  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:13.806150  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:13.831347  405191 cri.go:89] found id: ""
	I1206 10:53:13.831360  405191 logs.go:282] 0 containers: []
	W1206 10:53:13.831367  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:13.831410  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:13.831494  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:13.856962  405191 cri.go:89] found id: ""
	I1206 10:53:13.856976  405191 logs.go:282] 0 containers: []
	W1206 10:53:13.856983  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:13.856994  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:13.857057  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:13.883227  405191 cri.go:89] found id: ""
	I1206 10:53:13.883241  405191 logs.go:282] 0 containers: []
	W1206 10:53:13.883248  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:13.883256  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:13.883268  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:13.912731  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:13.912749  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:13.981562  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:13.981581  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:13.997805  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:13.997822  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:14.076333  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:14.066553   12495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:14.067750   12495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:14.068525   12495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:14.070431   12495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:14.071166   12495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:14.066553   12495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:14.067750   12495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:14.068525   12495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:14.070431   12495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:14.071166   12495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:53:14.076343  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:14.076355  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:16.646007  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:16.656726  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:16.656822  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:16.682515  405191 cri.go:89] found id: ""
	I1206 10:53:16.682529  405191 logs.go:282] 0 containers: []
	W1206 10:53:16.682535  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:16.682541  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:16.682609  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:16.708327  405191 cri.go:89] found id: ""
	I1206 10:53:16.708341  405191 logs.go:282] 0 containers: []
	W1206 10:53:16.708359  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:16.708365  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:16.708433  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:16.744002  405191 cri.go:89] found id: ""
	I1206 10:53:16.744023  405191 logs.go:282] 0 containers: []
	W1206 10:53:16.744032  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:16.744037  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:16.744099  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:16.771487  405191 cri.go:89] found id: ""
	I1206 10:53:16.771501  405191 logs.go:282] 0 containers: []
	W1206 10:53:16.771509  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:16.771514  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:16.771594  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:16.799494  405191 cri.go:89] found id: ""
	I1206 10:53:16.799507  405191 logs.go:282] 0 containers: []
	W1206 10:53:16.799514  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:16.799520  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:16.799595  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:16.825114  405191 cri.go:89] found id: ""
	I1206 10:53:16.825128  405191 logs.go:282] 0 containers: []
	W1206 10:53:16.825135  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:16.825141  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:16.825204  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:16.851277  405191 cri.go:89] found id: ""
	I1206 10:53:16.851304  405191 logs.go:282] 0 containers: []
	W1206 10:53:16.851312  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:16.851319  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:16.851329  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:16.880918  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:16.880935  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:16.946617  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:16.946636  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:16.961739  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:16.961756  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:17.047880  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:17.038809   12598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:17.039588   12598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:17.041249   12598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:17.041748   12598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:17.043299   12598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:17.038809   12598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:17.039588   12598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:17.041249   12598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:17.041748   12598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:17.043299   12598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:53:17.047890  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:17.047901  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:19.616855  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:19.627228  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:19.627288  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:19.654067  405191 cri.go:89] found id: ""
	I1206 10:53:19.654081  405191 logs.go:282] 0 containers: []
	W1206 10:53:19.654088  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:19.654093  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:19.654166  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:19.679488  405191 cri.go:89] found id: ""
	I1206 10:53:19.679502  405191 logs.go:282] 0 containers: []
	W1206 10:53:19.679509  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:19.679515  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:19.679573  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:19.706620  405191 cri.go:89] found id: ""
	I1206 10:53:19.706635  405191 logs.go:282] 0 containers: []
	W1206 10:53:19.706642  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:19.706647  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:19.706706  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:19.734381  405191 cri.go:89] found id: ""
	I1206 10:53:19.734395  405191 logs.go:282] 0 containers: []
	W1206 10:53:19.734406  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:19.734412  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:19.734476  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:19.761415  405191 cri.go:89] found id: ""
	I1206 10:53:19.761429  405191 logs.go:282] 0 containers: []
	W1206 10:53:19.761436  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:19.761441  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:19.761502  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:19.787176  405191 cri.go:89] found id: ""
	I1206 10:53:19.787190  405191 logs.go:282] 0 containers: []
	W1206 10:53:19.787203  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:19.787209  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:19.787270  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:19.813067  405191 cri.go:89] found id: ""
	I1206 10:53:19.813081  405191 logs.go:282] 0 containers: []
	W1206 10:53:19.813088  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:19.813096  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:19.813105  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:19.878821  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:19.878840  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:19.894664  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:19.894680  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:19.965061  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:19.956101   12690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:19.957218   12690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:19.958973   12690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:19.959413   12690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:19.960938   12690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:19.956101   12690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:19.957218   12690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:19.958973   12690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:19.959413   12690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:19.960938   12690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:53:19.965101  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:19.965111  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:20.038434  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:20.038456  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:22.572942  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:22.583202  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:22.583273  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:22.608534  405191 cri.go:89] found id: ""
	I1206 10:53:22.608548  405191 logs.go:282] 0 containers: []
	W1206 10:53:22.608556  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:22.608561  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:22.608623  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:22.637655  405191 cri.go:89] found id: ""
	I1206 10:53:22.637673  405191 logs.go:282] 0 containers: []
	W1206 10:53:22.637680  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:22.637685  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:22.637748  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:22.666908  405191 cri.go:89] found id: ""
	I1206 10:53:22.666922  405191 logs.go:282] 0 containers: []
	W1206 10:53:22.666929  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:22.666935  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:22.666995  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:22.694611  405191 cri.go:89] found id: ""
	I1206 10:53:22.694625  405191 logs.go:282] 0 containers: []
	W1206 10:53:22.694633  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:22.694638  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:22.694705  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:22.720468  405191 cri.go:89] found id: ""
	I1206 10:53:22.720482  405191 logs.go:282] 0 containers: []
	W1206 10:53:22.720489  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:22.720494  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:22.720551  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:22.750061  405191 cri.go:89] found id: ""
	I1206 10:53:22.750075  405191 logs.go:282] 0 containers: []
	W1206 10:53:22.750082  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:22.750087  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:22.750148  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:22.778201  405191 cri.go:89] found id: ""
	I1206 10:53:22.778216  405191 logs.go:282] 0 containers: []
	W1206 10:53:22.778223  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:22.778230  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:22.778241  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:22.848689  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:22.848710  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:22.878893  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:22.878908  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:22.945043  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:22.945065  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:22.960966  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:22.960982  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:23.041735  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:23.033031   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:23.033838   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:23.035561   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:23.036147   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:23.037681   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:23.033031   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:23.033838   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:23.035561   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:23.036147   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:23.037681   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
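The timestamps show the whole sequence repeating on a roughly three-second cadence, each pass beginning with the same pgrep check for a kube-apiserver process. The effect of that wait loop is equivalent to a sketch like the following (the pgrep pattern is taken verbatim from the log; the 40-attempt bound is illustrative, not minikube's actual timeout):

    # Poll for an apiserver process every 3s, up to ~120s (illustrative bound).
    for i in $(seq 1 40); do
      sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
      sleep 3
    done

In this run the process never appears, so the identical gather-and-retry cycles simply continue for the remainder of the log.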
	I1206 10:53:25.543429  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:25.553845  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:25.553906  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:25.580411  405191 cri.go:89] found id: ""
	I1206 10:53:25.580427  405191 logs.go:282] 0 containers: []
	W1206 10:53:25.580434  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:25.580439  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:25.580498  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:25.610347  405191 cri.go:89] found id: ""
	I1206 10:53:25.610361  405191 logs.go:282] 0 containers: []
	W1206 10:53:25.610368  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:25.610373  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:25.610430  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:25.637376  405191 cri.go:89] found id: ""
	I1206 10:53:25.637390  405191 logs.go:282] 0 containers: []
	W1206 10:53:25.637398  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:25.637403  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:25.637463  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:25.666544  405191 cri.go:89] found id: ""
	I1206 10:53:25.666558  405191 logs.go:282] 0 containers: []
	W1206 10:53:25.666572  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:25.666577  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:25.666636  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:25.692777  405191 cri.go:89] found id: ""
	I1206 10:53:25.692791  405191 logs.go:282] 0 containers: []
	W1206 10:53:25.692798  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:25.692803  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:25.692865  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:25.721819  405191 cri.go:89] found id: ""
	I1206 10:53:25.721833  405191 logs.go:282] 0 containers: []
	W1206 10:53:25.721841  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:25.721845  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:25.721901  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:25.749420  405191 cri.go:89] found id: ""
	I1206 10:53:25.749435  405191 logs.go:282] 0 containers: []
	W1206 10:53:25.749442  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:25.749450  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:25.749461  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:25.817956  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:25.817979  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:25.847454  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:25.847480  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:25.913445  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:25.913464  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:25.928310  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:25.928326  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:26.010257  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:25.998143   12912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:25.999802   12912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:26.001851   12912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:26.002260   12912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:26.005429   12912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:25.998143   12912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:25.999802   12912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:26.001851   12912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:26.002260   12912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:26.005429   12912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:53:28.510540  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:28.521536  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:28.521597  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:28.549848  405191 cri.go:89] found id: ""
	I1206 10:53:28.549862  405191 logs.go:282] 0 containers: []
	W1206 10:53:28.549869  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:28.549880  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:28.549941  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:28.574916  405191 cri.go:89] found id: ""
	I1206 10:53:28.574929  405191 logs.go:282] 0 containers: []
	W1206 10:53:28.574937  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:28.574941  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:28.575001  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:28.603948  405191 cri.go:89] found id: ""
	I1206 10:53:28.603963  405191 logs.go:282] 0 containers: []
	W1206 10:53:28.603971  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:28.603976  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:28.604038  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:28.633100  405191 cri.go:89] found id: ""
	I1206 10:53:28.633114  405191 logs.go:282] 0 containers: []
	W1206 10:53:28.633121  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:28.633127  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:28.633186  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:28.658360  405191 cri.go:89] found id: ""
	I1206 10:53:28.658374  405191 logs.go:282] 0 containers: []
	W1206 10:53:28.658381  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:28.658386  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:28.658450  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:28.686919  405191 cri.go:89] found id: ""
	I1206 10:53:28.686933  405191 logs.go:282] 0 containers: []
	W1206 10:53:28.686949  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:28.686955  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:28.687012  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:28.713970  405191 cri.go:89] found id: ""
	I1206 10:53:28.713984  405191 logs.go:282] 0 containers: []
	W1206 10:53:28.713991  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:28.714001  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:28.714011  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:28.783354  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:28.783415  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:28.799765  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:28.799785  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:28.875190  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:28.865163   13003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:28.865941   13003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:28.867993   13003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:28.868552   13003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:28.870594   13003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:28.865163   13003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:28.865941   13003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:28.867993   13003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:28.868552   13003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:28.870594   13003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:53:28.875200  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:28.875211  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:28.947238  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:28.947258  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:31.487136  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:31.497608  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:31.497670  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:31.524319  405191 cri.go:89] found id: ""
	I1206 10:53:31.524333  405191 logs.go:282] 0 containers: []
	W1206 10:53:31.524341  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:31.524347  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:31.524409  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:31.550830  405191 cri.go:89] found id: ""
	I1206 10:53:31.550845  405191 logs.go:282] 0 containers: []
	W1206 10:53:31.550852  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:31.550857  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:31.550925  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:31.577502  405191 cri.go:89] found id: ""
	I1206 10:53:31.577516  405191 logs.go:282] 0 containers: []
	W1206 10:53:31.577523  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:31.577528  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:31.577587  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:31.604074  405191 cri.go:89] found id: ""
	I1206 10:53:31.604088  405191 logs.go:282] 0 containers: []
	W1206 10:53:31.604095  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:31.604100  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:31.604157  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:31.630962  405191 cri.go:89] found id: ""
	I1206 10:53:31.630976  405191 logs.go:282] 0 containers: []
	W1206 10:53:31.630984  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:31.630989  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:31.631053  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:31.656604  405191 cri.go:89] found id: ""
	I1206 10:53:31.656619  405191 logs.go:282] 0 containers: []
	W1206 10:53:31.656626  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:31.656632  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:31.656695  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:31.682731  405191 cri.go:89] found id: ""
	I1206 10:53:31.682745  405191 logs.go:282] 0 containers: []
	W1206 10:53:31.682752  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:31.682760  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:31.682771  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:31.715043  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:31.715059  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:31.780742  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:31.780762  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:31.795393  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:31.795410  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:31.863799  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:31.855344   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:31.856015   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:31.857739   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:31.858189   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:31.859847   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:31.855344   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:31.856015   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:31.857739   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:31.858189   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:31.859847   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:53:31.863809  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:31.863820  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
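
The cycle above repeats for the rest of this section: each pass probes for an apiserver process, lists CRI containers per component, and re-gathers logs, restarting roughly every three seconds. As a minimal illustrative sketch (not minikube's actual wait logic), the same liveness probe against the apiserver port can be written with only the Go standard library; the address localhost:8441 and the cadence are taken from the log, while probeAPIServer and its timeout are hypothetical:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // probeAPIServer dials the port that kubectl's "connection refused"
    // errors above point at, retrying until something accepts the TCP
    // connection or the overall timeout expires.
    func probeAPIServer(addr string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		conn, err := net.DialTimeout("tcp", addr, time.Second)
    		if err == nil {
    			conn.Close()
    			return nil // a listener is up; kubectl should now connect
    		}
    		time.Sleep(3 * time.Second) // matches the ~3 s cycle cadence in the log
    	}
    	return fmt.Errorf("no listener on %s within %s", addr, timeout)
    }

    func main() {
    	if err := probeAPIServer("localhost:8441", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
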
	I1206 10:53:34.432706  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:34.442775  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:34.442837  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:34.468444  405191 cri.go:89] found id: ""
	I1206 10:53:34.468458  405191 logs.go:282] 0 containers: []
	W1206 10:53:34.468465  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:34.468471  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:34.468536  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:34.494328  405191 cri.go:89] found id: ""
	I1206 10:53:34.494343  405191 logs.go:282] 0 containers: []
	W1206 10:53:34.494350  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:34.494356  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:34.494418  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:34.527045  405191 cri.go:89] found id: ""
	I1206 10:53:34.527060  405191 logs.go:282] 0 containers: []
	W1206 10:53:34.527068  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:34.527076  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:34.527139  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:34.554315  405191 cri.go:89] found id: ""
	I1206 10:53:34.554328  405191 logs.go:282] 0 containers: []
	W1206 10:53:34.554335  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:34.554340  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:34.554408  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:34.579994  405191 cri.go:89] found id: ""
	I1206 10:53:34.580009  405191 logs.go:282] 0 containers: []
	W1206 10:53:34.580024  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:34.580030  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:34.580093  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:34.608896  405191 cri.go:89] found id: ""
	I1206 10:53:34.608910  405191 logs.go:282] 0 containers: []
	W1206 10:53:34.608917  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:34.608925  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:34.608983  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:34.638506  405191 cri.go:89] found id: ""
	I1206 10:53:34.638521  405191 logs.go:282] 0 containers: []
	W1206 10:53:34.638528  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:34.638536  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:34.638549  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:34.700281  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:34.691941   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:34.692724   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:34.693733   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:34.694312   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:34.696014   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:34.691941   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:34.692724   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:34.693733   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:34.694312   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:34.696014   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:53:34.700290  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:34.700302  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:34.773019  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:34.773040  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:34.803610  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:34.803628  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:34.870473  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:34.870498  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
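
The per-component checks in each cycle all reduce to the same crictl invocation, with only the --name filter changing. A small sketch of that call, with the crictl arguments copied verbatim from the log (the helper name listCRIContainers is hypothetical):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listCRIContainers wraps the `sudo crictl ps -a --quiet --name=<name>`
    // call issued per component; --quiet makes crictl print only container
    // IDs, one per line, so empty output is the `found id: ""` case above.
    func listCRIContainers(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	var ids []string
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		if line != "" {
    			ids = append(ids, line)
    		}
    	}
    	return ids, nil
    }

    func main() {
    	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
    		ids, err := listCRIContainers(name)
    		fmt.Printf("%s: %d containers (err: %v)\n", name, len(ids), err)
    	}
    }
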
	I1206 10:53:37.386935  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:37.397529  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:37.397618  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:37.422529  405191 cri.go:89] found id: ""
	I1206 10:53:37.422543  405191 logs.go:282] 0 containers: []
	W1206 10:53:37.422550  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:37.422556  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:37.422613  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:37.447810  405191 cri.go:89] found id: ""
	I1206 10:53:37.447824  405191 logs.go:282] 0 containers: []
	W1206 10:53:37.447830  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:37.447836  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:37.447895  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:37.473775  405191 cri.go:89] found id: ""
	I1206 10:53:37.473794  405191 logs.go:282] 0 containers: []
	W1206 10:53:37.473801  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:37.473806  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:37.473862  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:37.499349  405191 cri.go:89] found id: ""
	I1206 10:53:37.499362  405191 logs.go:282] 0 containers: []
	W1206 10:53:37.499370  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:37.499400  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:37.499468  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:37.526194  405191 cri.go:89] found id: ""
	I1206 10:53:37.526208  405191 logs.go:282] 0 containers: []
	W1206 10:53:37.526216  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:37.526221  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:37.526286  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:37.552021  405191 cri.go:89] found id: ""
	I1206 10:53:37.552041  405191 logs.go:282] 0 containers: []
	W1206 10:53:37.552049  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:37.552054  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:37.552113  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:37.577455  405191 cri.go:89] found id: ""
	I1206 10:53:37.577469  405191 logs.go:282] 0 containers: []
	W1206 10:53:37.577476  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:37.577484  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:37.577495  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:37.605307  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:37.605324  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:37.674813  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:37.674836  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:37.689252  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:37.689268  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:37.751707  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:37.743090   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:37.743746   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:37.745542   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:37.746167   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:37.747937   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:37.743090   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:37.743746   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:37.745542   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:37.746167   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:37.747937   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:53:37.751719  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:37.751730  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:40.320654  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:40.331310  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:40.331372  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:40.357691  405191 cri.go:89] found id: ""
	I1206 10:53:40.357706  405191 logs.go:282] 0 containers: []
	W1206 10:53:40.357721  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:40.357726  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:40.357789  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:40.383818  405191 cri.go:89] found id: ""
	I1206 10:53:40.383833  405191 logs.go:282] 0 containers: []
	W1206 10:53:40.383841  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:40.383847  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:40.383904  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:40.412121  405191 cri.go:89] found id: ""
	I1206 10:53:40.412134  405191 logs.go:282] 0 containers: []
	W1206 10:53:40.412141  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:40.412146  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:40.412204  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:40.438527  405191 cri.go:89] found id: ""
	I1206 10:53:40.438542  405191 logs.go:282] 0 containers: []
	W1206 10:53:40.438549  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:40.438554  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:40.438616  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:40.465329  405191 cri.go:89] found id: ""
	I1206 10:53:40.465344  405191 logs.go:282] 0 containers: []
	W1206 10:53:40.465351  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:40.465356  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:40.465420  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:40.491939  405191 cri.go:89] found id: ""
	I1206 10:53:40.491952  405191 logs.go:282] 0 containers: []
	W1206 10:53:40.491960  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:40.491965  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:40.492029  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:40.516801  405191 cri.go:89] found id: ""
	I1206 10:53:40.516821  405191 logs.go:282] 0 containers: []
	W1206 10:53:40.516828  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:40.516836  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:40.516848  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:40.593042  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:40.593062  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:40.608966  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:40.608986  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:40.675818  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:40.665869   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:40.667834   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:40.668210   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:40.669803   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:40.670394   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:40.665869   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:40.667834   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:40.668210   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:40.669803   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:40.670394   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:53:40.675828  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:40.675841  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:40.744680  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:40.744702  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
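
Each "Gathering logs for X ..." line corresponds to one shell command run through /bin/bash -c, mirroring the ssh_runner.go entries. The commands below are reproduced verbatim from the log; the map structure is an illustrative assumption, not the real layout of logs.go:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // logSources maps each gathered section to the exact command the log shows.
    var logSources = map[string]string{
    	"kubelet":          "sudo journalctl -u kubelet -n 400",
    	"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    	"CRI-O":            "sudo journalctl -u crio -n 400",
    	"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    }

    func main() {
    	for name, cmd := range logSources {
    		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		fmt.Printf("gathered %s: %d bytes (err: %v)\n", name, len(out), err)
    	}
    }
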
	I1206 10:53:43.275550  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:43.285722  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:43.285783  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:43.312235  405191 cri.go:89] found id: ""
	I1206 10:53:43.312249  405191 logs.go:282] 0 containers: []
	W1206 10:53:43.312262  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:43.312278  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:43.312337  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:43.338204  405191 cri.go:89] found id: ""
	I1206 10:53:43.338219  405191 logs.go:282] 0 containers: []
	W1206 10:53:43.338226  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:43.338249  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:43.338321  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:43.363434  405191 cri.go:89] found id: ""
	I1206 10:53:43.363455  405191 logs.go:282] 0 containers: []
	W1206 10:53:43.363463  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:43.363480  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:43.363562  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:43.390724  405191 cri.go:89] found id: ""
	I1206 10:53:43.390738  405191 logs.go:282] 0 containers: []
	W1206 10:53:43.390745  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:43.390750  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:43.390824  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:43.416427  405191 cri.go:89] found id: ""
	I1206 10:53:43.416442  405191 logs.go:282] 0 containers: []
	W1206 10:53:43.416449  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:43.416454  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:43.416511  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:43.446598  405191 cri.go:89] found id: ""
	I1206 10:53:43.446612  405191 logs.go:282] 0 containers: []
	W1206 10:53:43.446619  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:43.446625  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:43.446695  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:43.472759  405191 cri.go:89] found id: ""
	I1206 10:53:43.472773  405191 logs.go:282] 0 containers: []
	W1206 10:53:43.472779  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:43.472787  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:43.472797  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:43.538686  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:43.538706  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:43.553731  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:43.553746  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:43.618535  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:43.609715   13531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:43.610485   13531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:43.612195   13531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:43.612812   13531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:43.614536   13531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:43.609715   13531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:43.610485   13531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:43.612195   13531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:43.612812   13531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:43.614536   13531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:53:43.618556  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:43.618570  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:43.690132  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:43.690152  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:46.225047  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:46.236105  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:46.236179  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:46.269036  405191 cri.go:89] found id: ""
	I1206 10:53:46.269066  405191 logs.go:282] 0 containers: []
	W1206 10:53:46.269074  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:46.269079  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:46.269151  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:46.300616  405191 cri.go:89] found id: ""
	I1206 10:53:46.300631  405191 logs.go:282] 0 containers: []
	W1206 10:53:46.300639  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:46.300645  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:46.300707  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:46.330077  405191 cri.go:89] found id: ""
	I1206 10:53:46.330102  405191 logs.go:282] 0 containers: []
	W1206 10:53:46.330110  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:46.330115  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:46.330189  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:46.361893  405191 cri.go:89] found id: ""
	I1206 10:53:46.361908  405191 logs.go:282] 0 containers: []
	W1206 10:53:46.361915  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:46.361920  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:46.361991  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:46.387920  405191 cri.go:89] found id: ""
	I1206 10:53:46.387934  405191 logs.go:282] 0 containers: []
	W1206 10:53:46.387941  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:46.387947  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:46.388006  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:46.415440  405191 cri.go:89] found id: ""
	I1206 10:53:46.415463  405191 logs.go:282] 0 containers: []
	W1206 10:53:46.415470  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:46.415475  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:46.415534  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:46.442198  405191 cri.go:89] found id: ""
	I1206 10:53:46.442211  405191 logs.go:282] 0 containers: []
	W1206 10:53:46.442219  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:46.442226  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:46.442239  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:46.457274  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:46.457290  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:46.520346  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:46.512290   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:46.512824   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:46.514476   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:46.514946   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:46.516438   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:46.512290   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:46.512824   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:46.514476   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:46.514946   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:46.516438   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:53:46.520388  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:46.520399  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:46.595642  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:46.595673  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:46.626749  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:46.626769  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
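
The one step that fails in every cycle is "describe nodes". Re-running the exact command from the log (both paths copied verbatim) reproduces the failure mode: while nothing listens on localhost:8441 it exits with status 1 and the same "connection refused" stderr shown in the blocks above, and it only prints node details once the control plane is back. A minimal reproduction sketch:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Command string is verbatim from the failed-describe-nodes warnings above.
    	cmd := "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	fmt.Printf("err: %v\n%s", err, out) // err is non-nil (exit status 1) while the apiserver is down
    }
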
	I1206 10:53:49.193445  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:49.203743  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:49.203807  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:49.233557  405191 cri.go:89] found id: ""
	I1206 10:53:49.233571  405191 logs.go:282] 0 containers: []
	W1206 10:53:49.233578  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:49.233583  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:49.233643  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:49.265569  405191 cri.go:89] found id: ""
	I1206 10:53:49.265583  405191 logs.go:282] 0 containers: []
	W1206 10:53:49.265590  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:49.265595  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:49.265651  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:49.296146  405191 cri.go:89] found id: ""
	I1206 10:53:49.296159  405191 logs.go:282] 0 containers: []
	W1206 10:53:49.296166  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:49.296172  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:49.296232  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:49.321471  405191 cri.go:89] found id: ""
	I1206 10:53:49.321485  405191 logs.go:282] 0 containers: []
	W1206 10:53:49.321492  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:49.321498  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:49.321556  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:49.346537  405191 cri.go:89] found id: ""
	I1206 10:53:49.346551  405191 logs.go:282] 0 containers: []
	W1206 10:53:49.346571  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:49.346577  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:49.346693  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:49.372292  405191 cri.go:89] found id: ""
	I1206 10:53:49.372307  405191 logs.go:282] 0 containers: []
	W1206 10:53:49.372314  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:49.372320  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:49.372382  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:49.397395  405191 cri.go:89] found id: ""
	I1206 10:53:49.397408  405191 logs.go:282] 0 containers: []
	W1206 10:53:49.397415  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:49.397422  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:49.397432  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:49.464359  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:49.464378  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:49.479746  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:49.479762  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:49.542949  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:49.534167   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:49.534752   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:49.536598   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:49.537091   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:49.538580   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:49.534167   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:49.534752   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:49.536598   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:49.537091   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:49.538580   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:53:49.542959  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:49.542969  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:49.612749  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:49.612769  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:52.142276  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:52.152804  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:52.152867  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:52.179560  405191 cri.go:89] found id: ""
	I1206 10:53:52.179575  405191 logs.go:282] 0 containers: []
	W1206 10:53:52.179582  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:52.179587  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:52.179642  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:52.204827  405191 cri.go:89] found id: ""
	I1206 10:53:52.204842  405191 logs.go:282] 0 containers: []
	W1206 10:53:52.204849  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:52.204854  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:52.204917  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:52.250790  405191 cri.go:89] found id: ""
	I1206 10:53:52.250804  405191 logs.go:282] 0 containers: []
	W1206 10:53:52.250811  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:52.250816  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:52.250886  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:52.282140  405191 cri.go:89] found id: ""
	I1206 10:53:52.282153  405191 logs.go:282] 0 containers: []
	W1206 10:53:52.282161  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:52.282166  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:52.282225  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:52.314373  405191 cri.go:89] found id: ""
	I1206 10:53:52.314387  405191 logs.go:282] 0 containers: []
	W1206 10:53:52.314395  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:52.314400  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:52.314471  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:52.339037  405191 cri.go:89] found id: ""
	I1206 10:53:52.339051  405191 logs.go:282] 0 containers: []
	W1206 10:53:52.339058  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:52.339064  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:52.339124  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:52.366113  405191 cri.go:89] found id: ""
	I1206 10:53:52.366127  405191 logs.go:282] 0 containers: []
	W1206 10:53:52.366134  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:52.366142  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:52.366152  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:52.436368  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:52.436388  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:52.451468  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:52.451487  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:52.518739  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:52.509542   13849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:52.509966   13849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:52.511603   13849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:52.511955   13849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:52.513754   13849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:52.509542   13849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:52.509966   13849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:52.511603   13849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:52.511955   13849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:52.513754   13849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:53:52.518760  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:52.518777  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:52.593784  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:52.593805  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
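
Every cycle opens with a pgrep probe before any container listing, i.e. "is any apiserver process running at all?". A hypothetical helper around that check, with the pgrep arguments taken from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // apiserverRunning wraps `sudo pgrep -xnf kube-apiserver.*minikube.*`.
    // pgrep exits 0 only when at least one matching process exists, so a
    // non-nil error from Run doubles as the "not running" signal.
    func apiserverRunning() bool {
    	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
    	fmt.Println("kube-apiserver process found:", apiserverRunning())
    }
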
	I1206 10:53:55.124735  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:55.135510  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:55.135574  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:55.162613  405191 cri.go:89] found id: ""
	I1206 10:53:55.162626  405191 logs.go:282] 0 containers: []
	W1206 10:53:55.162633  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:55.162638  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:55.162703  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:55.189655  405191 cri.go:89] found id: ""
	I1206 10:53:55.189669  405191 logs.go:282] 0 containers: []
	W1206 10:53:55.189676  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:55.189682  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:55.189786  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:55.215289  405191 cri.go:89] found id: ""
	I1206 10:53:55.215303  405191 logs.go:282] 0 containers: []
	W1206 10:53:55.215310  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:55.215315  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:55.215402  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:55.247890  405191 cri.go:89] found id: ""
	I1206 10:53:55.247913  405191 logs.go:282] 0 containers: []
	W1206 10:53:55.247921  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:55.247926  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:55.247992  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:55.283368  405191 cri.go:89] found id: ""
	I1206 10:53:55.283409  405191 logs.go:282] 0 containers: []
	W1206 10:53:55.283416  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:55.283422  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:55.283516  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:55.310596  405191 cri.go:89] found id: ""
	I1206 10:53:55.310609  405191 logs.go:282] 0 containers: []
	W1206 10:53:55.310627  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:55.310632  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:55.310712  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:55.337361  405191 cri.go:89] found id: ""
	I1206 10:53:55.337374  405191 logs.go:282] 0 containers: []
	W1206 10:53:55.337381  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:55.337389  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:55.337399  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:55.404341  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:55.404361  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:55.419687  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:55.419705  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:55.485498  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:55.476614   13958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:55.477821   13958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:55.478840   13958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:55.479810   13958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:55.480435   13958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:55.476614   13958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:55.477821   13958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:55.478840   13958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:55.479810   13958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:55.480435   13958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:53:55.485509  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:55.485522  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:55.555911  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:55.555932  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
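	[editor's note] The block above is one full iteration of minikube's control-plane wait loop: it probes for a kube-apiserver process, lists CRI containers per component, and re-gathers kubelet/dmesg/CRI-O diagnostics, then repeats roughly every 3 seconds while the apiserver stays down. The sketch below is a minimal, illustrative poll-with-deadline loop in that shape; it is not minikube's actual implementation, and the 6-minute deadline and helper names are assumptions.

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"strings"
	    	"time"
	    )

	    // apiserverRunning mirrors the check seen in the log:
	    // `sudo pgrep -xnf kube-apiserver.*minikube.*`
	    func apiserverRunning() bool {
	    	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	    	return err == nil && strings.TrimSpace(string(out)) != ""
	    }

	    func main() {
	    	deadline := time.Now().Add(6 * time.Minute) // assumed timeout, for illustration
	    	for time.Now().Before(deadline) {
	    		if apiserverRunning() {
	    			fmt.Println("apiserver is up")
	    			return
	    		}
	    		// The real loop re-gathers kubelet/dmesg/CRI-O logs here
	    		// before sleeping, as the repeated blocks above show.
	    		time.Sleep(3 * time.Second)
	    	}
	    	fmt.Println("timed out waiting for kube-apiserver")
	    }

	In this run the loop never observes a running apiserver, which is why the same gather cycle recurs below with only timestamps and PIDs changing.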
	I1206 10:53:58.088179  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:58.099010  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:58.099069  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:58.124686  405191 cri.go:89] found id: ""
	I1206 10:53:58.124700  405191 logs.go:282] 0 containers: []
	W1206 10:53:58.124710  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:58.124716  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:58.124773  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:58.149717  405191 cri.go:89] found id: ""
	I1206 10:53:58.149730  405191 logs.go:282] 0 containers: []
	W1206 10:53:58.149738  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:58.149743  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:58.149800  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:58.177293  405191 cri.go:89] found id: ""
	I1206 10:53:58.177307  405191 logs.go:282] 0 containers: []
	W1206 10:53:58.177314  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:58.177319  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:58.177389  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:58.203540  405191 cri.go:89] found id: ""
	I1206 10:53:58.203554  405191 logs.go:282] 0 containers: []
	W1206 10:53:58.203562  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:58.203567  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:58.203632  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:58.237354  405191 cri.go:89] found id: ""
	I1206 10:53:58.237377  405191 logs.go:282] 0 containers: []
	W1206 10:53:58.237385  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:58.237390  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:58.237459  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:58.269725  405191 cri.go:89] found id: ""
	I1206 10:53:58.269739  405191 logs.go:282] 0 containers: []
	W1206 10:53:58.269746  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:58.269751  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:58.269821  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:58.297406  405191 cri.go:89] found id: ""
	I1206 10:53:58.297420  405191 logs.go:282] 0 containers: []
	W1206 10:53:58.297427  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:58.297435  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:58.297445  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:58.363296  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:58.363319  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:58.379154  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:58.379170  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:58.448306  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:58.438857   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:58.439654   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:58.441442   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:58.441790   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:58.443511   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:58.438857   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:58.439654   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:58.441442   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:58.441790   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:58.443511   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:53:58.448317  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:58.448331  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:58.518384  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:58.518408  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:01.052183  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:01.062404  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:01.062462  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:01.087509  405191 cri.go:89] found id: ""
	I1206 10:54:01.087523  405191 logs.go:282] 0 containers: []
	W1206 10:54:01.087530  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:01.087536  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:01.087598  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:01.113371  405191 cri.go:89] found id: ""
	I1206 10:54:01.113385  405191 logs.go:282] 0 containers: []
	W1206 10:54:01.113392  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:01.113397  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:01.113456  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:01.140194  405191 cri.go:89] found id: ""
	I1206 10:54:01.140208  405191 logs.go:282] 0 containers: []
	W1206 10:54:01.140214  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:01.140220  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:01.140282  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:01.166431  405191 cri.go:89] found id: ""
	I1206 10:54:01.166445  405191 logs.go:282] 0 containers: []
	W1206 10:54:01.166452  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:01.166460  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:01.166523  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:01.195742  405191 cri.go:89] found id: ""
	I1206 10:54:01.195756  405191 logs.go:282] 0 containers: []
	W1206 10:54:01.195764  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:01.195769  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:01.195835  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:01.228731  405191 cri.go:89] found id: ""
	I1206 10:54:01.228746  405191 logs.go:282] 0 containers: []
	W1206 10:54:01.228753  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:01.228759  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:01.228821  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:01.260175  405191 cri.go:89] found id: ""
	I1206 10:54:01.260189  405191 logs.go:282] 0 containers: []
	W1206 10:54:01.260196  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:01.260204  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:01.260214  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:01.337819  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:01.337839  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:01.353486  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:01.353502  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:01.423278  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:01.414904   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:01.415292   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:01.417033   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:01.417517   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:01.418780   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:01.414904   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:01.415292   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:01.417033   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:01.417517   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:01.418780   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:01.423288  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:01.423299  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:01.492536  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:01.492556  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:04.028526  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:04.039535  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:04.039600  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:04.069150  405191 cri.go:89] found id: ""
	I1206 10:54:04.069164  405191 logs.go:282] 0 containers: []
	W1206 10:54:04.069172  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:04.069177  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:04.069238  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:04.100343  405191 cri.go:89] found id: ""
	I1206 10:54:04.100357  405191 logs.go:282] 0 containers: []
	W1206 10:54:04.100364  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:04.100369  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:04.100431  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:04.127347  405191 cri.go:89] found id: ""
	I1206 10:54:04.127361  405191 logs.go:282] 0 containers: []
	W1206 10:54:04.127368  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:04.127395  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:04.127466  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:04.154542  405191 cri.go:89] found id: ""
	I1206 10:54:04.154557  405191 logs.go:282] 0 containers: []
	W1206 10:54:04.154564  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:04.154569  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:04.154628  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:04.181647  405191 cri.go:89] found id: ""
	I1206 10:54:04.181661  405191 logs.go:282] 0 containers: []
	W1206 10:54:04.181668  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:04.181676  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:04.181739  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:04.210872  405191 cri.go:89] found id: ""
	I1206 10:54:04.210886  405191 logs.go:282] 0 containers: []
	W1206 10:54:04.210893  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:04.210899  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:04.210962  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:04.246454  405191 cri.go:89] found id: ""
	I1206 10:54:04.246468  405191 logs.go:282] 0 containers: []
	W1206 10:54:04.246482  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:04.246490  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:04.246501  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:04.322848  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:04.322872  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:04.338928  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:04.338945  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:04.409905  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:04.400164   14270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:04.400961   14270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:04.402781   14270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:04.403461   14270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:04.404662   14270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:04.400164   14270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:04.400961   14270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:04.402781   14270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:04.403461   14270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:04.404662   14270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:04.409916  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:04.409928  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:04.480369  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:04.480389  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:07.012345  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:07.022891  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:07.022962  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:07.049835  405191 cri.go:89] found id: ""
	I1206 10:54:07.049849  405191 logs.go:282] 0 containers: []
	W1206 10:54:07.049856  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:07.049861  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:07.049925  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:07.076617  405191 cri.go:89] found id: ""
	I1206 10:54:07.076631  405191 logs.go:282] 0 containers: []
	W1206 10:54:07.076637  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:07.076643  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:07.076704  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:07.103202  405191 cri.go:89] found id: ""
	I1206 10:54:07.103216  405191 logs.go:282] 0 containers: []
	W1206 10:54:07.103223  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:07.103229  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:07.103288  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:07.129964  405191 cri.go:89] found id: ""
	I1206 10:54:07.129977  405191 logs.go:282] 0 containers: []
	W1206 10:54:07.129984  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:07.129989  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:07.130048  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:07.157459  405191 cri.go:89] found id: ""
	I1206 10:54:07.157473  405191 logs.go:282] 0 containers: []
	W1206 10:54:07.157480  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:07.157485  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:07.157551  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:07.183797  405191 cri.go:89] found id: ""
	I1206 10:54:07.183811  405191 logs.go:282] 0 containers: []
	W1206 10:54:07.183818  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:07.183823  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:07.183881  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:07.209675  405191 cri.go:89] found id: ""
	I1206 10:54:07.209689  405191 logs.go:282] 0 containers: []
	W1206 10:54:07.209697  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:07.209704  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:07.209715  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:07.228202  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:07.228225  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:07.312770  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:07.304201   14373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:07.304672   14373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:07.306492   14373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:07.307083   14373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:07.308768   14373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:07.304201   14373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:07.304672   14373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:07.306492   14373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:07.307083   14373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:07.308768   14373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:07.312782  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:07.312792  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:07.383254  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:07.383275  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:07.414045  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:07.414060  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:09.985551  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:09.995745  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:09.995806  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:10.030868  405191 cri.go:89] found id: ""
	I1206 10:54:10.030884  405191 logs.go:282] 0 containers: []
	W1206 10:54:10.030892  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:10.030898  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:10.030967  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:10.060505  405191 cri.go:89] found id: ""
	I1206 10:54:10.060520  405191 logs.go:282] 0 containers: []
	W1206 10:54:10.060527  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:10.060532  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:10.060596  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:10.087945  405191 cri.go:89] found id: ""
	I1206 10:54:10.087979  405191 logs.go:282] 0 containers: []
	W1206 10:54:10.087986  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:10.087992  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:10.088069  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:10.116434  405191 cri.go:89] found id: ""
	I1206 10:54:10.116448  405191 logs.go:282] 0 containers: []
	W1206 10:54:10.116455  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:10.116461  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:10.116523  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:10.144560  405191 cri.go:89] found id: ""
	I1206 10:54:10.144572  405191 logs.go:282] 0 containers: []
	W1206 10:54:10.144579  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:10.144584  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:10.144645  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:10.173019  405191 cri.go:89] found id: ""
	I1206 10:54:10.173033  405191 logs.go:282] 0 containers: []
	W1206 10:54:10.173040  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:10.173046  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:10.173105  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:10.200809  405191 cri.go:89] found id: ""
	I1206 10:54:10.200823  405191 logs.go:282] 0 containers: []
	W1206 10:54:10.200830  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:10.200837  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:10.200847  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:10.215623  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:10.215642  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:10.300302  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:10.291573   14479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:10.292121   14479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:10.293856   14479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:10.294451   14479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:10.295989   14479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:10.291573   14479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:10.292121   14479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:10.293856   14479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:10.294451   14479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:10.295989   14479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:10.300314  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:10.300325  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:10.369603  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:10.369624  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:10.402671  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:10.402687  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:12.968162  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:12.978411  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:12.978473  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:13.010579  405191 cri.go:89] found id: ""
	I1206 10:54:13.010593  405191 logs.go:282] 0 containers: []
	W1206 10:54:13.010601  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:13.010606  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:13.010669  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:13.037103  405191 cri.go:89] found id: ""
	I1206 10:54:13.037118  405191 logs.go:282] 0 containers: []
	W1206 10:54:13.037125  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:13.037131  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:13.037199  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:13.063109  405191 cri.go:89] found id: ""
	I1206 10:54:13.063124  405191 logs.go:282] 0 containers: []
	W1206 10:54:13.063131  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:13.063136  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:13.063195  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:13.088780  405191 cri.go:89] found id: ""
	I1206 10:54:13.088794  405191 logs.go:282] 0 containers: []
	W1206 10:54:13.088801  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:13.088806  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:13.088868  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:13.114682  405191 cri.go:89] found id: ""
	I1206 10:54:13.114696  405191 logs.go:282] 0 containers: []
	W1206 10:54:13.114703  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:13.114708  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:13.114952  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:13.141850  405191 cri.go:89] found id: ""
	I1206 10:54:13.141866  405191 logs.go:282] 0 containers: []
	W1206 10:54:13.141873  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:13.141880  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:13.141945  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:13.167958  405191 cri.go:89] found id: ""
	I1206 10:54:13.167975  405191 logs.go:282] 0 containers: []
	W1206 10:54:13.167982  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:13.167990  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:13.168002  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:13.237314  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:13.237335  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:13.254137  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:13.254164  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:13.322226  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:13.313022   14585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:13.313640   14585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:13.315145   14585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:13.315780   14585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:13.317512   14585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:13.313022   14585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:13.313640   14585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:13.315145   14585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:13.315780   14585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:13.317512   14585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:13.322237  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:13.322248  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:13.394938  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:13.394958  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:15.923162  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:15.933287  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:15.933346  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:15.958679  405191 cri.go:89] found id: ""
	I1206 10:54:15.958694  405191 logs.go:282] 0 containers: []
	W1206 10:54:15.958701  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:15.958706  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:15.958768  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:15.986252  405191 cri.go:89] found id: ""
	I1206 10:54:15.986267  405191 logs.go:282] 0 containers: []
	W1206 10:54:15.986274  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:15.986279  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:15.986339  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:16.015947  405191 cri.go:89] found id: ""
	I1206 10:54:16.015961  405191 logs.go:282] 0 containers: []
	W1206 10:54:16.015968  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:16.015973  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:16.016038  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:16.046583  405191 cri.go:89] found id: ""
	I1206 10:54:16.046597  405191 logs.go:282] 0 containers: []
	W1206 10:54:16.046604  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:16.046609  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:16.046673  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:16.073401  405191 cri.go:89] found id: ""
	I1206 10:54:16.073415  405191 logs.go:282] 0 containers: []
	W1206 10:54:16.073422  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:16.073428  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:16.073489  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:16.099301  405191 cri.go:89] found id: ""
	I1206 10:54:16.099315  405191 logs.go:282] 0 containers: []
	W1206 10:54:16.099321  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:16.099327  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:16.099409  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:16.132045  405191 cri.go:89] found id: ""
	I1206 10:54:16.132060  405191 logs.go:282] 0 containers: []
	W1206 10:54:16.132067  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:16.132075  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:16.132086  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:16.201949  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:16.191660   14680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:16.193954   14680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:16.194695   14680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:16.196370   14680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:16.196866   14680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:16.191660   14680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:16.193954   14680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:16.194695   14680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:16.196370   14680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:16.196866   14680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:16.201962  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:16.201972  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:16.277750  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:16.277769  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:16.311130  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:16.311148  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:16.377771  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:16.377793  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:18.893108  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:18.903283  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:18.903345  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:18.927861  405191 cri.go:89] found id: ""
	I1206 10:54:18.927875  405191 logs.go:282] 0 containers: []
	W1206 10:54:18.927882  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:18.927887  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:18.927945  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:18.953460  405191 cri.go:89] found id: ""
	I1206 10:54:18.953474  405191 logs.go:282] 0 containers: []
	W1206 10:54:18.953482  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:18.953486  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:18.953563  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:18.981063  405191 cri.go:89] found id: ""
	I1206 10:54:18.981077  405191 logs.go:282] 0 containers: []
	W1206 10:54:18.981088  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:18.981093  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:18.981154  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:19.011134  405191 cri.go:89] found id: ""
	I1206 10:54:19.011148  405191 logs.go:282] 0 containers: []
	W1206 10:54:19.011156  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:19.011161  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:19.011221  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:19.037866  405191 cri.go:89] found id: ""
	I1206 10:54:19.037889  405191 logs.go:282] 0 containers: []
	W1206 10:54:19.037895  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:19.037901  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:19.037972  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:19.067672  405191 cri.go:89] found id: ""
	I1206 10:54:19.067685  405191 logs.go:282] 0 containers: []
	W1206 10:54:19.067692  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:19.067697  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:19.067753  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:19.092891  405191 cri.go:89] found id: ""
	I1206 10:54:19.092906  405191 logs.go:282] 0 containers: []
	W1206 10:54:19.092913  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:19.092921  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:19.092933  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:19.158186  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:19.149512   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:19.150168   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:19.151831   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:19.152364   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:19.154131   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:19.149512   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:19.150168   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:19.151831   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:19.152364   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:19.154131   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:19.158196  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:19.158209  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:19.231681  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:19.231701  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:19.267680  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:19.267704  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:19.341777  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:19.341796  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
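	[editor's note] The repeated "connection refused" stderr blocks come from kubectl's discovery request to https://localhost:8441/api with nothing listening on that port. A minimal probe reproducing that failure mode is sketched below; the port and path are taken from the log, while the client setup (timeout, skipped TLS verification) is illustrative only.

	    package main

	    import (
	    	"crypto/tls"
	    	"fmt"
	    	"net/http"
	    	"time"
	    )

	    func main() {
	    	client := &http.Client{
	    		Timeout: 5 * time.Second,
	    		Transport: &http.Transport{
	    			// The apiserver serves a self-signed cert in this setup;
	    			// skip verification for this one-off probe only.
	    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	    		},
	    	}
	    	resp, err := client.Get("https://localhost:8441/api?timeout=32s")
	    	if err != nil {
	    		// With no apiserver bound to 8441 this prints a dial error
	    		// like the "connect: connection refused" lines above.
	    		fmt.Println("probe failed:", err)
	    		return
	    	}
	    	defer resp.Body.Close()
	    	fmt.Println("apiserver responded:", resp.Status)
	    }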
	I1206 10:54:21.856895  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:21.867600  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:21.867659  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:21.899562  405191 cri.go:89] found id: ""
	I1206 10:54:21.899576  405191 logs.go:282] 0 containers: []
	W1206 10:54:21.899583  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:21.899589  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:21.899647  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:21.924433  405191 cri.go:89] found id: ""
	I1206 10:54:21.924446  405191 logs.go:282] 0 containers: []
	W1206 10:54:21.924454  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:21.924459  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:21.924517  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:21.949461  405191 cri.go:89] found id: ""
	I1206 10:54:21.949476  405191 logs.go:282] 0 containers: []
	W1206 10:54:21.949482  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:21.949493  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:21.949550  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:21.976373  405191 cri.go:89] found id: ""
	I1206 10:54:21.976388  405191 logs.go:282] 0 containers: []
	W1206 10:54:21.976396  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:21.976401  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:21.976457  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:22.025051  405191 cri.go:89] found id: ""
	I1206 10:54:22.025074  405191 logs.go:282] 0 containers: []
	W1206 10:54:22.025095  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:22.025101  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:22.025214  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:22.054790  405191 cri.go:89] found id: ""
	I1206 10:54:22.054804  405191 logs.go:282] 0 containers: []
	W1206 10:54:22.054811  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:22.054817  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:22.054873  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:22.081220  405191 cri.go:89] found id: ""
	I1206 10:54:22.081235  405191 logs.go:282] 0 containers: []
	W1206 10:54:22.081242  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:22.081251  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:22.081262  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:22.147339  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:22.147359  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:22.162252  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:22.162268  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:22.233807  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:22.219327   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:22.220102   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:22.225452   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:22.227256   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:22.228864   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:22.219327   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:22.220102   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:22.225452   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:22.227256   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:22.228864   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:22.233819  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:22.233838  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:22.312101  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:22.312123  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:24.852672  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:24.863210  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:24.863271  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:24.889673  405191 cri.go:89] found id: ""
	I1206 10:54:24.889687  405191 logs.go:282] 0 containers: []
	W1206 10:54:24.889695  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:24.889700  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:24.889758  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:24.920816  405191 cri.go:89] found id: ""
	I1206 10:54:24.920830  405191 logs.go:282] 0 containers: []
	W1206 10:54:24.920837  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:24.920842  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:24.920900  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:24.945958  405191 cri.go:89] found id: ""
	I1206 10:54:24.945972  405191 logs.go:282] 0 containers: []
	W1206 10:54:24.945980  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:24.945985  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:24.946046  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:24.970886  405191 cri.go:89] found id: ""
	I1206 10:54:24.970900  405191 logs.go:282] 0 containers: []
	W1206 10:54:24.970907  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:24.970912  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:24.970970  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:25.000298  405191 cri.go:89] found id: ""
	I1206 10:54:25.000315  405191 logs.go:282] 0 containers: []
	W1206 10:54:25.000323  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:25.000329  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:25.000399  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:25.033867  405191 cri.go:89] found id: ""
	I1206 10:54:25.033882  405191 logs.go:282] 0 containers: []
	W1206 10:54:25.033890  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:25.033895  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:25.033960  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:25.060149  405191 cri.go:89] found id: ""
	I1206 10:54:25.060162  405191 logs.go:282] 0 containers: []
	W1206 10:54:25.060169  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:25.060177  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:25.060188  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:25.128734  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:25.120144   14994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:25.120771   14994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:25.122547   14994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:25.123145   14994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:25.124861   14994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:25.120144   14994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:25.120771   14994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:25.122547   14994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:25.123145   14994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:25.124861   14994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:25.128746  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:25.128757  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:25.198421  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:25.198443  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:25.239321  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:25.239341  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:25.316857  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:25.316878  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:27.833465  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:27.844470  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:27.844528  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:27.870606  405191 cri.go:89] found id: ""
	I1206 10:54:27.870621  405191 logs.go:282] 0 containers: []
	W1206 10:54:27.870628  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:27.870633  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:27.870693  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:27.894893  405191 cri.go:89] found id: ""
	I1206 10:54:27.894906  405191 logs.go:282] 0 containers: []
	W1206 10:54:27.894913  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:27.894918  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:27.894973  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:27.920116  405191 cri.go:89] found id: ""
	I1206 10:54:27.920129  405191 logs.go:282] 0 containers: []
	W1206 10:54:27.920136  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:27.920142  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:27.920201  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:27.946774  405191 cri.go:89] found id: ""
	I1206 10:54:27.946788  405191 logs.go:282] 0 containers: []
	W1206 10:54:27.946798  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:27.946806  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:27.946869  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:27.973164  405191 cri.go:89] found id: ""
	I1206 10:54:27.973178  405191 logs.go:282] 0 containers: []
	W1206 10:54:27.973185  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:27.973190  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:27.973247  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:28.005225  405191 cri.go:89] found id: ""
	I1206 10:54:28.005240  405191 logs.go:282] 0 containers: []
	W1206 10:54:28.005248  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:28.005255  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:28.005329  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:28.034341  405191 cri.go:89] found id: ""
	I1206 10:54:28.034355  405191 logs.go:282] 0 containers: []
	W1206 10:54:28.034362  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:28.034370  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:28.034381  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:28.107547  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:28.107567  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:28.136561  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:28.136578  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:28.206187  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:28.206206  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:28.224556  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:28.224580  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:28.311110  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:28.302520   15128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:28.303509   15128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:28.305089   15128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:28.305582   15128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:28.307158   15128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:28.302520   15128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:28.303509   15128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:28.305089   15128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:28.305582   15128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:28.307158   15128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
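
Every "describe nodes" attempt in this log fails the same way: kubectl cannot complete API group discovery because nothing accepts TCP connections on localhost:8441, so each request dies with "connection refused" before TLS or authentication even come into play. A quick way to confirm that reading of the error is a bare TCP dial against the apiserver port; a minimal sketch (the two-second timeout is an arbitrary illustrative choice):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// If nothing is bound to the apiserver port, the dial itself fails
	// with "connection refused", matching the kubectl errors above.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver port unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on :8441")
}

If the dial succeeded but kubectl still failed, the problem would sit higher in the stack (certificates, RBAC); here the refused dial points at a kube-apiserver that never came up.
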
	I1206 10:54:30.811550  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:30.821711  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:30.821769  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:30.850956  405191 cri.go:89] found id: ""
	I1206 10:54:30.850970  405191 logs.go:282] 0 containers: []
	W1206 10:54:30.850979  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:30.850984  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:30.851045  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:30.876542  405191 cri.go:89] found id: ""
	I1206 10:54:30.876558  405191 logs.go:282] 0 containers: []
	W1206 10:54:30.876565  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:30.876571  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:30.876630  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:30.902552  405191 cri.go:89] found id: ""
	I1206 10:54:30.902566  405191 logs.go:282] 0 containers: []
	W1206 10:54:30.902573  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:30.902578  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:30.902635  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:30.928737  405191 cri.go:89] found id: ""
	I1206 10:54:30.928751  405191 logs.go:282] 0 containers: []
	W1206 10:54:30.928758  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:30.928764  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:30.928829  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:30.954309  405191 cri.go:89] found id: ""
	I1206 10:54:30.954323  405191 logs.go:282] 0 containers: []
	W1206 10:54:30.954330  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:30.954335  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:30.954394  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:30.980239  405191 cri.go:89] found id: ""
	I1206 10:54:30.980251  405191 logs.go:282] 0 containers: []
	W1206 10:54:30.980258  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:30.980263  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:30.980319  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:31.010962  405191 cri.go:89] found id: ""
	I1206 10:54:31.010977  405191 logs.go:282] 0 containers: []
	W1206 10:54:31.010985  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:31.010994  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:31.011006  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:31.078259  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:31.069995   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:31.070621   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:31.072176   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:31.072646   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:31.074155   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:31.069995   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:31.070621   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:31.072176   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:31.072646   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:31.074155   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:31.078270  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:31.078282  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:31.147428  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:31.147455  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:31.181028  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:31.181045  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:31.253555  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:31.253574  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:33.770610  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:33.781236  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:33.781299  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:33.806547  405191 cri.go:89] found id: ""
	I1206 10:54:33.806561  405191 logs.go:282] 0 containers: []
	W1206 10:54:33.806568  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:33.806574  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:33.806632  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:33.832359  405191 cri.go:89] found id: ""
	I1206 10:54:33.832371  405191 logs.go:282] 0 containers: []
	W1206 10:54:33.832379  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:33.832383  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:33.832442  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:33.857194  405191 cri.go:89] found id: ""
	I1206 10:54:33.857207  405191 logs.go:282] 0 containers: []
	W1206 10:54:33.857214  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:33.857219  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:33.857280  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:33.886113  405191 cri.go:89] found id: ""
	I1206 10:54:33.886126  405191 logs.go:282] 0 containers: []
	W1206 10:54:33.886133  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:33.886138  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:33.886194  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:33.914351  405191 cri.go:89] found id: ""
	I1206 10:54:33.914364  405191 logs.go:282] 0 containers: []
	W1206 10:54:33.914371  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:33.914376  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:33.914438  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:33.939584  405191 cri.go:89] found id: ""
	I1206 10:54:33.939598  405191 logs.go:282] 0 containers: []
	W1206 10:54:33.939605  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:33.939611  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:33.939683  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:33.965467  405191 cri.go:89] found id: ""
	I1206 10:54:33.965481  405191 logs.go:282] 0 containers: []
	W1206 10:54:33.965488  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:33.965496  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:33.965506  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:34.034434  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:34.034456  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:34.068244  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:34.068263  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:34.136528  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:34.136548  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:34.151695  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:34.151713  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:34.237655  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:34.227619   15334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:34.228750   15334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:34.231547   15334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:34.232096   15334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:34.233598   15334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:34.227619   15334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:34.228750   15334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:34.231547   15334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:34.232096   15334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:34.233598   15334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
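
The timestamps on the pgrep lines (10:54:19, :21.8, :24.8, :27.8, :30.8, ...) show the probe repeating on a roughly three-second, fixed-interval cadence rather than with exponential backoff. A hedged Go sketch of that polling shape, with illustrative interval and deadline values rather than minikube's actual ones:

package main

import (
	"errors"
	"fmt"
	"time"
)

// pollUntil re-runs check every interval until it succeeds or the
// deadline passes -- the shape of the retry loop visible in this log.
func pollUntil(interval, timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		if err := check(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for apiserver")
		}
		time.Sleep(interval)
	}
}

func main() {
	err := pollUntil(3*time.Second, 30*time.Second, func() error {
		return errors.New("kube-apiserver still not running") // stand-in check
	})
	fmt.Println(err)
}
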
	I1206 10:54:36.737997  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:36.748632  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:36.748739  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:36.774541  405191 cri.go:89] found id: ""
	I1206 10:54:36.774554  405191 logs.go:282] 0 containers: []
	W1206 10:54:36.774563  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:36.774568  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:36.774628  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:36.804563  405191 cri.go:89] found id: ""
	I1206 10:54:36.804577  405191 logs.go:282] 0 containers: []
	W1206 10:54:36.804585  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:36.804590  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:36.804649  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:36.829295  405191 cri.go:89] found id: ""
	I1206 10:54:36.829309  405191 logs.go:282] 0 containers: []
	W1206 10:54:36.829316  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:36.829322  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:36.829384  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:36.854740  405191 cri.go:89] found id: ""
	I1206 10:54:36.854754  405191 logs.go:282] 0 containers: []
	W1206 10:54:36.854761  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:36.854767  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:36.854827  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:36.879535  405191 cri.go:89] found id: ""
	I1206 10:54:36.879548  405191 logs.go:282] 0 containers: []
	W1206 10:54:36.879555  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:36.879560  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:36.879621  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:36.908804  405191 cri.go:89] found id: ""
	I1206 10:54:36.908818  405191 logs.go:282] 0 containers: []
	W1206 10:54:36.908826  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:36.908831  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:36.908891  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:36.935290  405191 cri.go:89] found id: ""
	I1206 10:54:36.935312  405191 logs.go:282] 0 containers: []
	W1206 10:54:36.935320  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:36.935328  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:36.935338  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:37.005221  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:37.005253  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:37.023044  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:37.023070  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:37.090033  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:37.082290   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:37.082864   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:37.084384   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:37.084721   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:37.086198   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:37.082290   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:37.082864   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:37.084384   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:37.084721   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:37.086198   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:37.090044  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:37.090055  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:37.158891  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:37.158911  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:39.688451  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:39.698958  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:39.699020  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:39.725003  405191 cri.go:89] found id: ""
	I1206 10:54:39.725017  405191 logs.go:282] 0 containers: []
	W1206 10:54:39.725024  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:39.725029  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:39.725086  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:39.750186  405191 cri.go:89] found id: ""
	I1206 10:54:39.750208  405191 logs.go:282] 0 containers: []
	W1206 10:54:39.750215  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:39.750221  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:39.750286  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:39.777512  405191 cri.go:89] found id: ""
	I1206 10:54:39.777527  405191 logs.go:282] 0 containers: []
	W1206 10:54:39.777534  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:39.777539  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:39.777598  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:39.805960  405191 cri.go:89] found id: ""
	I1206 10:54:39.805974  405191 logs.go:282] 0 containers: []
	W1206 10:54:39.805981  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:39.805987  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:39.806048  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:39.832070  405191 cri.go:89] found id: ""
	I1206 10:54:39.832086  405191 logs.go:282] 0 containers: []
	W1206 10:54:39.832093  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:39.832099  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:39.832162  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:39.856950  405191 cri.go:89] found id: ""
	I1206 10:54:39.856964  405191 logs.go:282] 0 containers: []
	W1206 10:54:39.856970  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:39.856976  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:39.857034  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:39.882830  405191 cri.go:89] found id: ""
	I1206 10:54:39.882844  405191 logs.go:282] 0 containers: []
	W1206 10:54:39.882851  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:39.882859  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:39.882869  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:39.948996  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:39.949016  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:39.964250  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:39.964266  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:40.040200  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:40.026040   15535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:40.026898   15535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:40.028891   15535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:40.029963   15535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:40.030727   15535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:40.026040   15535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:40.026898   15535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:40.028891   15535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:40.029963   15535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:40.030727   15535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:40.040211  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:40.040222  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:40.112805  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:40.112828  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:42.645898  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:42.656339  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:42.656399  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:42.681441  405191 cri.go:89] found id: ""
	I1206 10:54:42.681456  405191 logs.go:282] 0 containers: []
	W1206 10:54:42.681462  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:42.681468  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:42.681529  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:42.706692  405191 cri.go:89] found id: ""
	I1206 10:54:42.706706  405191 logs.go:282] 0 containers: []
	W1206 10:54:42.706713  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:42.706718  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:42.706781  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:42.734049  405191 cri.go:89] found id: ""
	I1206 10:54:42.734063  405191 logs.go:282] 0 containers: []
	W1206 10:54:42.734070  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:42.734075  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:42.734136  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:42.759095  405191 cri.go:89] found id: ""
	I1206 10:54:42.759115  405191 logs.go:282] 0 containers: []
	W1206 10:54:42.759123  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:42.759128  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:42.759190  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:42.786861  405191 cri.go:89] found id: ""
	I1206 10:54:42.786875  405191 logs.go:282] 0 containers: []
	W1206 10:54:42.786882  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:42.786887  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:42.786949  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:42.817648  405191 cri.go:89] found id: ""
	I1206 10:54:42.817663  405191 logs.go:282] 0 containers: []
	W1206 10:54:42.817670  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:42.817675  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:42.817738  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:42.844223  405191 cri.go:89] found id: ""
	I1206 10:54:42.844245  405191 logs.go:282] 0 containers: []
	W1206 10:54:42.844253  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:42.844261  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:42.844278  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:42.914866  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:42.904424   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:42.904903   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:42.907237   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:42.908578   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:42.909360   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:42.904424   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:42.904903   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:42.907237   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:42.908578   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:42.909360   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:42.914877  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:42.914888  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:42.987160  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:42.987181  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:43.017513  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:43.017529  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:43.084573  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:43.084595  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
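
The "container status" gather step uses a fallback chain: it locates crictl via which (falling back to the bare name if which finds nothing) and, if that whole invocation fails, retries with docker ps -a, so the step still returns something on Docker-based runtimes. A small Go sketch of running commands with that first-success fallback; firstSuccessful is a hypothetical helper name, not a minikube API:

package main

import (
	"fmt"
	"os/exec"
)

// firstSuccessful runs each command line in order until one exits zero,
// mirroring the `crictl ps -a || docker ps -a` fallback in the log.
func firstSuccessful(cmds ...string) (string, error) {
	var lastErr error
	for _, c := range cmds {
		out, err := exec.Command("/bin/bash", "-c", c).Output()
		if err == nil {
			return string(out), nil
		}
		lastErr = err
	}
	return "", lastErr
}

func main() {
	out, err := firstSuccessful(
		"sudo `which crictl || echo crictl` ps -a",
		"sudo docker ps -a",
	)
	if err != nil {
		fmt.Println("no container runtime responded:", err)
		return
	}
	fmt.Print(out)
}
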
	I1206 10:54:45.600685  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:45.611239  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:45.611299  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:45.635510  405191 cri.go:89] found id: ""
	I1206 10:54:45.635525  405191 logs.go:282] 0 containers: []
	W1206 10:54:45.635532  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:45.635538  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:45.635604  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:45.664995  405191 cri.go:89] found id: ""
	I1206 10:54:45.665008  405191 logs.go:282] 0 containers: []
	W1206 10:54:45.665015  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:45.665020  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:45.665077  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:45.691036  405191 cri.go:89] found id: ""
	I1206 10:54:45.691050  405191 logs.go:282] 0 containers: []
	W1206 10:54:45.691057  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:45.691062  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:45.691120  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:45.716374  405191 cri.go:89] found id: ""
	I1206 10:54:45.716388  405191 logs.go:282] 0 containers: []
	W1206 10:54:45.716395  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:45.716400  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:45.716461  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:45.742083  405191 cri.go:89] found id: ""
	I1206 10:54:45.742097  405191 logs.go:282] 0 containers: []
	W1206 10:54:45.742105  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:45.742110  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:45.742177  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:45.767269  405191 cri.go:89] found id: ""
	I1206 10:54:45.767282  405191 logs.go:282] 0 containers: []
	W1206 10:54:45.767290  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:45.767295  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:45.767352  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:45.793130  405191 cri.go:89] found id: ""
	I1206 10:54:45.793144  405191 logs.go:282] 0 containers: []
	W1206 10:54:45.793151  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:45.793158  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:45.793169  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:45.822623  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:45.822639  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:45.889014  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:45.889036  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:45.903697  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:45.903713  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:45.967833  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:45.959169   15755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:45.960025   15755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:45.961643   15755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:45.962228   15755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:45.963959   15755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
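	(Annotation: every kubectl call in this run dies on the same symptom: nothing is listening on port 8441, the apiserver port this profile uses. A quick way to confirm that from the node is a plain probe of the health endpoint; the port comes from the errors above, while curl and its flags are an assumption, not part of minikube's tooling.)

	    # -k skips TLS verification (self-signed certs); --max-time bounds the wait.
	    curl -k --max-time 5 https://localhost:8441/healthz \
	      || echo "apiserver unreachable on :8441 (matches the connection refused above)"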
	I1206 10:54:45.967843  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:45.967854  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
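	(Annotation: with no containers to inspect, the harness falls back to host-level sources. The five gather commands repeat verbatim in every cycle below; collected in one place as a sketch runnable by hand on the node. The paths and the v1.35.0-beta.0 kubectl binary are specific to this run.)

	    sudo journalctl -u kubelet -n 400          # kubelet service log
	    sudo journalctl -u crio -n 400             # CRI-O service log
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig   # fails while the apiserver is down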
	I1206 10:54:48.539593  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:48.549488  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:48.549547  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:48.578962  405191 cri.go:89] found id: ""
	I1206 10:54:48.578976  405191 logs.go:282] 0 containers: []
	W1206 10:54:48.578983  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:48.578989  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:48.579060  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:48.604320  405191 cri.go:89] found id: ""
	I1206 10:54:48.604335  405191 logs.go:282] 0 containers: []
	W1206 10:54:48.604342  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:48.604347  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:48.604407  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:48.630562  405191 cri.go:89] found id: ""
	I1206 10:54:48.630575  405191 logs.go:282] 0 containers: []
	W1206 10:54:48.630583  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:48.630588  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:48.630645  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:48.659186  405191 cri.go:89] found id: ""
	I1206 10:54:48.659200  405191 logs.go:282] 0 containers: []
	W1206 10:54:48.659207  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:48.659218  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:48.659278  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:48.686349  405191 cri.go:89] found id: ""
	I1206 10:54:48.686363  405191 logs.go:282] 0 containers: []
	W1206 10:54:48.686371  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:48.686376  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:48.686433  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:48.712958  405191 cri.go:89] found id: ""
	I1206 10:54:48.712973  405191 logs.go:282] 0 containers: []
	W1206 10:54:48.712980  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:48.712985  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:48.713045  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:48.738763  405191 cri.go:89] found id: ""
	I1206 10:54:48.738777  405191 logs.go:282] 0 containers: []
	W1206 10:54:48.738783  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:48.738791  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:48.738801  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:48.753416  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:48.753431  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:48.818598  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:48.810121   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:48.810830   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:48.812598   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:48.813183   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:48.814760   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:54:48.818609  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:48.818620  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:48.888023  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:48.888043  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:48.917094  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:48.917110  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:51.485627  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:51.497092  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:51.497157  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:51.529254  405191 cri.go:89] found id: ""
	I1206 10:54:51.529268  405191 logs.go:282] 0 containers: []
	W1206 10:54:51.529275  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:51.529281  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:51.529340  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:51.555292  405191 cri.go:89] found id: ""
	I1206 10:54:51.555305  405191 logs.go:282] 0 containers: []
	W1206 10:54:51.555312  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:51.555316  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:51.555390  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:51.580443  405191 cri.go:89] found id: ""
	I1206 10:54:51.580458  405191 logs.go:282] 0 containers: []
	W1206 10:54:51.580465  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:51.580470  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:51.580529  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:51.605907  405191 cri.go:89] found id: ""
	I1206 10:54:51.605921  405191 logs.go:282] 0 containers: []
	W1206 10:54:51.605928  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:51.605933  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:51.605991  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:51.630731  405191 cri.go:89] found id: ""
	I1206 10:54:51.630745  405191 logs.go:282] 0 containers: []
	W1206 10:54:51.630752  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:51.630757  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:51.630816  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:51.655906  405191 cri.go:89] found id: ""
	I1206 10:54:51.655919  405191 logs.go:282] 0 containers: []
	W1206 10:54:51.655926  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:51.655931  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:51.655987  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:51.681242  405191 cri.go:89] found id: ""
	I1206 10:54:51.681256  405191 logs.go:282] 0 containers: []
	W1206 10:54:51.681267  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:51.681275  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:51.681285  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:51.750829  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:51.750849  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:51.766064  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:51.766080  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:51.831905  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:51.823637   15956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:51.824299   15956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:51.825840   15956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:51.826394   15956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:51.827960   15956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:54:51.831915  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:51.831925  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:51.901462  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:51.901484  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
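	(Annotation: the "container status" step above uses a small shell fallback worth calling out: `which crictl || echo crictl` yields the literal word crictl even when the binary is absent, so the first command fails cleanly and control passes to the docker half on Docker-runtime nodes. Isolated as a one-line sketch.)

	    # Prefer crictl when installed; otherwise fall through to docker.
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a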
	I1206 10:54:54.431319  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:54.441623  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:54.441686  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:54.470441  405191 cri.go:89] found id: ""
	I1206 10:54:54.470456  405191 logs.go:282] 0 containers: []
	W1206 10:54:54.470463  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:54.470469  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:54.470527  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:54.505844  405191 cri.go:89] found id: ""
	I1206 10:54:54.505858  405191 logs.go:282] 0 containers: []
	W1206 10:54:54.505865  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:54.505870  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:54.505931  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:54.540765  405191 cri.go:89] found id: ""
	I1206 10:54:54.540779  405191 logs.go:282] 0 containers: []
	W1206 10:54:54.540786  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:54.540791  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:54.540859  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:54.568534  405191 cri.go:89] found id: ""
	I1206 10:54:54.568559  405191 logs.go:282] 0 containers: []
	W1206 10:54:54.568566  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:54.568571  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:54.568631  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:54.598488  405191 cri.go:89] found id: ""
	I1206 10:54:54.598501  405191 logs.go:282] 0 containers: []
	W1206 10:54:54.598508  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:54.598513  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:54.598573  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:54.625601  405191 cri.go:89] found id: ""
	I1206 10:54:54.625615  405191 logs.go:282] 0 containers: []
	W1206 10:54:54.625622  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:54.625627  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:54.625684  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:54.651039  405191 cri.go:89] found id: ""
	I1206 10:54:54.651053  405191 logs.go:282] 0 containers: []
	W1206 10:54:54.651069  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:54.651077  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:54.651088  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:54.721711  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:54.712700   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:54.713574   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:54.715366   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:54.715761   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:54.717298   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:54:54.721724  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:54.721734  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:54.793778  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:54.793803  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:54.825565  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:54.825580  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:54.891107  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:54.891127  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:57.406177  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:57.416168  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:57.416231  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:57.444260  405191 cri.go:89] found id: ""
	I1206 10:54:57.444274  405191 logs.go:282] 0 containers: []
	W1206 10:54:57.444281  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:57.444286  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:57.444352  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:57.473921  405191 cri.go:89] found id: ""
	I1206 10:54:57.473935  405191 logs.go:282] 0 containers: []
	W1206 10:54:57.473942  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:57.473947  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:57.474006  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:57.507969  405191 cri.go:89] found id: ""
	I1206 10:54:57.507983  405191 logs.go:282] 0 containers: []
	W1206 10:54:57.507990  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:57.507995  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:57.508057  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:57.536405  405191 cri.go:89] found id: ""
	I1206 10:54:57.536420  405191 logs.go:282] 0 containers: []
	W1206 10:54:57.536428  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:57.536433  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:57.536502  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:57.564180  405191 cri.go:89] found id: ""
	I1206 10:54:57.564194  405191 logs.go:282] 0 containers: []
	W1206 10:54:57.564201  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:57.564206  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:57.564271  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:57.594665  405191 cri.go:89] found id: ""
	I1206 10:54:57.594679  405191 logs.go:282] 0 containers: []
	W1206 10:54:57.594687  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:57.594692  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:57.594751  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:57.627345  405191 cri.go:89] found id: ""
	I1206 10:54:57.627360  405191 logs.go:282] 0 containers: []
	W1206 10:54:57.627367  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:57.627398  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:57.627409  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:57.694026  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:57.694046  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:57.708621  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:57.708636  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:57.772743  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:57.764569   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:57.765305   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:57.766828   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:57.767291   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:57.768789   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:54:57.772753  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:57.772764  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:57.841816  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:57.841836  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:55:00.375636  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:00.396560  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:55:00.396634  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:55:00.458455  405191 cri.go:89] found id: ""
	I1206 10:55:00.458471  405191 logs.go:282] 0 containers: []
	W1206 10:55:00.458479  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:55:00.458485  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:55:00.458553  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:55:00.497287  405191 cri.go:89] found id: ""
	I1206 10:55:00.497304  405191 logs.go:282] 0 containers: []
	W1206 10:55:00.497311  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:55:00.497317  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:55:00.497382  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:55:00.531076  405191 cri.go:89] found id: ""
	I1206 10:55:00.531092  405191 logs.go:282] 0 containers: []
	W1206 10:55:00.531099  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:55:00.531104  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:55:00.531172  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:55:00.567464  405191 cri.go:89] found id: ""
	I1206 10:55:00.567485  405191 logs.go:282] 0 containers: []
	W1206 10:55:00.567493  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:55:00.567499  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:55:00.567600  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:55:00.600497  405191 cri.go:89] found id: ""
	I1206 10:55:00.600512  405191 logs.go:282] 0 containers: []
	W1206 10:55:00.600520  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:55:00.600526  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:55:00.600596  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:55:00.648830  405191 cri.go:89] found id: ""
	I1206 10:55:00.648852  405191 logs.go:282] 0 containers: []
	W1206 10:55:00.648861  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:55:00.648868  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:55:00.648939  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:55:00.678773  405191 cri.go:89] found id: ""
	I1206 10:55:00.678789  405191 logs.go:282] 0 containers: []
	W1206 10:55:00.678797  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:55:00.678822  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:55:00.678834  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:55:00.748615  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:55:00.748637  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:55:00.764401  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:55:00.764420  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:55:00.836152  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:55:00.827231   16271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:00.828085   16271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:00.830005   16271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:00.830399   16271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:00.832026   16271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:55:00.836163  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:55:00.836174  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:55:00.909732  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:55:00.909761  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:55:03.441095  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:03.451635  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:55:03.451701  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:55:03.486201  405191 cri.go:89] found id: ""
	I1206 10:55:03.486214  405191 logs.go:282] 0 containers: []
	W1206 10:55:03.486222  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:55:03.486226  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:55:03.486286  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:55:03.530153  405191 cri.go:89] found id: ""
	I1206 10:55:03.530167  405191 logs.go:282] 0 containers: []
	W1206 10:55:03.530174  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:55:03.530179  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:55:03.530243  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:55:03.559790  405191 cri.go:89] found id: ""
	I1206 10:55:03.559804  405191 logs.go:282] 0 containers: []
	W1206 10:55:03.559811  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:55:03.559816  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:55:03.559874  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:55:03.586392  405191 cri.go:89] found id: ""
	I1206 10:55:03.586406  405191 logs.go:282] 0 containers: []
	W1206 10:55:03.586413  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:55:03.586418  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:55:03.586477  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:55:03.612699  405191 cri.go:89] found id: ""
	I1206 10:55:03.612714  405191 logs.go:282] 0 containers: []
	W1206 10:55:03.612726  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:55:03.612732  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:55:03.612827  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:55:03.641895  405191 cri.go:89] found id: ""
	I1206 10:55:03.641909  405191 logs.go:282] 0 containers: []
	W1206 10:55:03.641916  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:55:03.641921  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:55:03.641978  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:55:03.668194  405191 cri.go:89] found id: ""
	I1206 10:55:03.668208  405191 logs.go:282] 0 containers: []
	W1206 10:55:03.668216  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:55:03.668224  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:55:03.668234  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:55:03.738567  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:55:03.738585  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:55:03.753715  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:55:03.753732  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:55:03.819356  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:55:03.811487   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:03.812006   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:03.813500   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:03.813921   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:03.815528   16373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:55:03.819368  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:55:03.819393  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:55:03.888845  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:55:03.888866  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:55:06.421279  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:06.431630  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:55:06.431691  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:55:06.457432  405191 cri.go:89] found id: ""
	I1206 10:55:06.457446  405191 logs.go:282] 0 containers: []
	W1206 10:55:06.457453  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:55:06.457458  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:55:06.457525  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:55:06.498897  405191 cri.go:89] found id: ""
	I1206 10:55:06.498911  405191 logs.go:282] 0 containers: []
	W1206 10:55:06.498918  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:55:06.498923  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:55:06.498994  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:55:06.532288  405191 cri.go:89] found id: ""
	I1206 10:55:06.532320  405191 logs.go:282] 0 containers: []
	W1206 10:55:06.532328  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:55:06.532332  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:55:06.532403  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:55:06.558737  405191 cri.go:89] found id: ""
	I1206 10:55:06.558751  405191 logs.go:282] 0 containers: []
	W1206 10:55:06.558758  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:55:06.558764  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:55:06.558835  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:55:06.588791  405191 cri.go:89] found id: ""
	I1206 10:55:06.588805  405191 logs.go:282] 0 containers: []
	W1206 10:55:06.588813  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:55:06.588818  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:55:06.588887  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:55:06.615097  405191 cri.go:89] found id: ""
	I1206 10:55:06.615110  405191 logs.go:282] 0 containers: []
	W1206 10:55:06.615117  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:55:06.615122  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:55:06.615182  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:55:06.640273  405191 cri.go:89] found id: ""
	I1206 10:55:06.640297  405191 logs.go:282] 0 containers: []
	W1206 10:55:06.640305  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:55:06.640312  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:55:06.640323  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:55:06.709781  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:55:06.709800  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:55:06.724307  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:55:06.724323  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:55:06.788894  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:55:06.780020   16482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:06.780621   16482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:06.782266   16482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:06.782823   16482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:06.784380   16482 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:55:06.788903  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:55:06.788913  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:55:06.857942  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:55:06.857963  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:55:09.392819  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:09.402617  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:55:09.402675  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:55:09.429928  405191 cri.go:89] found id: ""
	I1206 10:55:09.429942  405191 logs.go:282] 0 containers: []
	W1206 10:55:09.429949  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:55:09.429955  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:55:09.430018  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:55:09.455893  405191 cri.go:89] found id: ""
	I1206 10:55:09.455907  405191 logs.go:282] 0 containers: []
	W1206 10:55:09.455913  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:55:09.455918  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:55:09.455975  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:55:09.492759  405191 cri.go:89] found id: ""
	I1206 10:55:09.492772  405191 logs.go:282] 0 containers: []
	W1206 10:55:09.492779  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:55:09.492784  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:55:09.492842  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:55:09.524405  405191 cri.go:89] found id: ""
	I1206 10:55:09.524418  405191 logs.go:282] 0 containers: []
	W1206 10:55:09.524425  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:55:09.524430  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:55:09.524488  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:55:09.555465  405191 cri.go:89] found id: ""
	I1206 10:55:09.555479  405191 logs.go:282] 0 containers: []
	W1206 10:55:09.555486  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:55:09.555491  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:55:09.555551  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:55:09.582561  405191 cri.go:89] found id: ""
	I1206 10:55:09.582575  405191 logs.go:282] 0 containers: []
	W1206 10:55:09.582582  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:55:09.582588  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:55:09.582646  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:55:09.608767  405191 cri.go:89] found id: ""
	I1206 10:55:09.608781  405191 logs.go:282] 0 containers: []
	W1206 10:55:09.608788  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:55:09.608796  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:55:09.608810  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:55:09.677518  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:55:09.677539  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:55:09.692935  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:55:09.692955  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:55:09.760066  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:55:09.750973   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:09.751783   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:09.753612   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:09.754387   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:09.755955   16586 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:55:09.760077  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:55:09.760087  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:55:09.829605  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:55:09.829626  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
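	(The repeated "connection refused" on localhost:8441 above means nothing is listening on this profile's apiserver port. A minimal manual check from inside the node would be the following sketch; these commands are not part of the original run and assume shell access to the minikube container:

	# Is anything listening on the apiserver port (8441 for this profile)?
	sudo ss -ltnp | grep 8441 || echo "no listener on 8441"
	# Is a kube-apiserver process alive at all? Same pattern minikube polls below.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
	)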
	I1206 10:55:12.359607  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:12.370647  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:55:12.370708  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:55:12.402338  405191 cri.go:89] found id: ""
	I1206 10:55:12.402353  405191 logs.go:282] 0 containers: []
	W1206 10:55:12.402361  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:55:12.402366  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:55:12.402435  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:55:12.428498  405191 cri.go:89] found id: ""
	I1206 10:55:12.428513  405191 logs.go:282] 0 containers: []
	W1206 10:55:12.428520  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:55:12.428525  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:55:12.428587  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:55:12.454311  405191 cri.go:89] found id: ""
	I1206 10:55:12.454325  405191 logs.go:282] 0 containers: []
	W1206 10:55:12.454333  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:55:12.454338  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:55:12.454399  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:55:12.493402  405191 cri.go:89] found id: ""
	I1206 10:55:12.493416  405191 logs.go:282] 0 containers: []
	W1206 10:55:12.493423  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:55:12.493429  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:55:12.493487  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:55:12.527015  405191 cri.go:89] found id: ""
	I1206 10:55:12.527029  405191 logs.go:282] 0 containers: []
	W1206 10:55:12.527036  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:55:12.527042  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:55:12.527103  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:55:12.556788  405191 cri.go:89] found id: ""
	I1206 10:55:12.556812  405191 logs.go:282] 0 containers: []
	W1206 10:55:12.556820  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:55:12.556825  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:55:12.556897  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:55:12.584336  405191 cri.go:89] found id: ""
	I1206 10:55:12.584350  405191 logs.go:282] 0 containers: []
	W1206 10:55:12.584357  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:55:12.584365  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:55:12.584376  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:55:12.614039  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:55:12.614055  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:55:12.680316  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:55:12.680338  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:55:12.696525  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:55:12.696542  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:55:12.760110  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:55:12.751882   16704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:12.752591   16704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:12.754143   16704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:12.754484   16704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:12.756046   16704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:55:12.751882   16704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:12.752591   16704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:12.754143   16704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:12.754484   16704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:12.756046   16704 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:55:12.760120  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:55:12.760131  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:55:15.332168  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:15.342873  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:55:15.342950  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:55:15.371175  405191 cri.go:89] found id: ""
	I1206 10:55:15.371189  405191 logs.go:282] 0 containers: []
	W1206 10:55:15.371207  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:55:15.371212  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:55:15.371279  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:55:15.397085  405191 cri.go:89] found id: ""
	I1206 10:55:15.397100  405191 logs.go:282] 0 containers: []
	W1206 10:55:15.397107  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:55:15.397112  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:55:15.397171  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:55:15.422142  405191 cri.go:89] found id: ""
	I1206 10:55:15.422156  405191 logs.go:282] 0 containers: []
	W1206 10:55:15.422163  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:55:15.422174  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:55:15.422231  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:55:15.447127  405191 cri.go:89] found id: ""
	I1206 10:55:15.447141  405191 logs.go:282] 0 containers: []
	W1206 10:55:15.447148  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:55:15.447154  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:55:15.447212  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:55:15.477786  405191 cri.go:89] found id: ""
	I1206 10:55:15.477800  405191 logs.go:282] 0 containers: []
	W1206 10:55:15.477808  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:55:15.477813  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:55:15.477875  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:55:15.507270  405191 cri.go:89] found id: ""
	I1206 10:55:15.507285  405191 logs.go:282] 0 containers: []
	W1206 10:55:15.507292  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:55:15.507297  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:55:15.507360  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:55:15.536433  405191 cri.go:89] found id: ""
	I1206 10:55:15.536451  405191 logs.go:282] 0 containers: []
	W1206 10:55:15.536458  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:55:15.536470  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:55:15.536480  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:55:15.608040  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:55:15.608061  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:55:15.623617  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:55:15.623635  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:55:15.692548  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:55:15.684603   16800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:15.685140   16800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:15.686901   16800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:15.687564   16800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:15.688573   16800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:55:15.684603   16800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:15.685140   16800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:15.686901   16800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:15.687564   16800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:15.688573   16800 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:55:15.692558  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:55:15.692581  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:55:15.760517  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:55:15.760537  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
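	(The "container status" step above prefers crictl and falls back to docker in a single command line. An equivalent, more readable sketch of the same fallback, for illustration only:

	# Use crictl when available, otherwise fall back to the docker CLI.
	if command -v crictl >/dev/null 2>&1; then
	  sudo crictl ps -a
	else
	  sudo docker ps -a
	fi
	)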
	I1206 10:55:18.289173  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:18.300544  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:55:18.300610  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:55:18.327678  405191 cri.go:89] found id: ""
	I1206 10:55:18.327692  405191 logs.go:282] 0 containers: []
	W1206 10:55:18.327699  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:55:18.327704  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:55:18.327764  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:55:18.353999  405191 cri.go:89] found id: ""
	I1206 10:55:18.354014  405191 logs.go:282] 0 containers: []
	W1206 10:55:18.354021  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:55:18.354026  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:55:18.354084  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:55:18.382276  405191 cri.go:89] found id: ""
	I1206 10:55:18.382291  405191 logs.go:282] 0 containers: []
	W1206 10:55:18.382298  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:55:18.382304  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:55:18.382365  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:55:18.410827  405191 cri.go:89] found id: ""
	I1206 10:55:18.410841  405191 logs.go:282] 0 containers: []
	W1206 10:55:18.410847  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:55:18.410852  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:55:18.410911  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:55:18.436138  405191 cri.go:89] found id: ""
	I1206 10:55:18.436160  405191 logs.go:282] 0 containers: []
	W1206 10:55:18.436167  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:55:18.436172  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:55:18.436233  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:55:18.462254  405191 cri.go:89] found id: ""
	I1206 10:55:18.462269  405191 logs.go:282] 0 containers: []
	W1206 10:55:18.462276  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:55:18.462283  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:55:18.462346  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:55:18.492347  405191 cri.go:89] found id: ""
	I1206 10:55:18.492362  405191 logs.go:282] 0 containers: []
	W1206 10:55:18.492369  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:55:18.492377  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:55:18.492388  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:55:18.509956  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:55:18.509973  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:55:18.581031  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:55:18.572020   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:18.572812   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:18.573929   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:18.574912   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:18.575787   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:55:18.572020   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:18.572812   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:18.573929   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:18.574912   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:18.575787   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:55:18.581041  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:55:18.581055  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:55:18.650942  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:55:18.650963  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:55:18.680668  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:55:18.680685  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:55:21.248379  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:21.258903  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:55:21.258982  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:55:21.286273  405191 cri.go:89] found id: ""
	I1206 10:55:21.286288  405191 logs.go:282] 0 containers: []
	W1206 10:55:21.286295  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:55:21.286300  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:55:21.286357  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:55:21.311824  405191 cri.go:89] found id: ""
	I1206 10:55:21.311841  405191 logs.go:282] 0 containers: []
	W1206 10:55:21.311851  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:55:21.311857  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:55:21.311923  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:55:21.338690  405191 cri.go:89] found id: ""
	I1206 10:55:21.338704  405191 logs.go:282] 0 containers: []
	W1206 10:55:21.338711  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:55:21.338716  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:55:21.338773  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:55:21.365841  405191 cri.go:89] found id: ""
	I1206 10:55:21.365855  405191 logs.go:282] 0 containers: []
	W1206 10:55:21.365862  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:55:21.365868  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:55:21.365926  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:55:21.396001  405191 cri.go:89] found id: ""
	I1206 10:55:21.396035  405191 logs.go:282] 0 containers: []
	W1206 10:55:21.396043  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:55:21.396049  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:55:21.396118  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:55:21.421823  405191 cri.go:89] found id: ""
	I1206 10:55:21.421837  405191 logs.go:282] 0 containers: []
	W1206 10:55:21.421856  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:55:21.421862  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:55:21.421934  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:55:21.449590  405191 cri.go:89] found id: ""
	I1206 10:55:21.449604  405191 logs.go:282] 0 containers: []
	W1206 10:55:21.449611  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:55:21.449619  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:55:21.449631  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:55:21.464618  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:55:21.464634  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:55:21.543901  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:55:21.526696   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:21.535561   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:21.536267   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:21.537985   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:21.538498   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:55:21.526696   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:21.535561   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:21.536267   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:21.537985   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:21.538498   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:55:21.543913  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:55:21.543926  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:55:21.614646  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:55:21.614669  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:55:21.645809  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:55:21.645825  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:55:24.214037  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:24.226008  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:55:24.226071  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:55:24.252473  405191 cri.go:89] found id: ""
	I1206 10:55:24.252487  405191 logs.go:282] 0 containers: []
	W1206 10:55:24.252495  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:55:24.252500  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:55:24.252560  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:55:24.280242  405191 cri.go:89] found id: ""
	I1206 10:55:24.280256  405191 logs.go:282] 0 containers: []
	W1206 10:55:24.280263  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:55:24.280268  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:55:24.280328  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:55:24.307083  405191 cri.go:89] found id: ""
	I1206 10:55:24.307098  405191 logs.go:282] 0 containers: []
	W1206 10:55:24.307105  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:55:24.307111  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:55:24.307181  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:55:24.333215  405191 cri.go:89] found id: ""
	I1206 10:55:24.333230  405191 logs.go:282] 0 containers: []
	W1206 10:55:24.333239  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:55:24.333245  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:55:24.333312  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:55:24.364248  405191 cri.go:89] found id: ""
	I1206 10:55:24.364262  405191 logs.go:282] 0 containers: []
	W1206 10:55:24.364269  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:55:24.364275  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:55:24.364340  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:55:24.392539  405191 cri.go:89] found id: ""
	I1206 10:55:24.392554  405191 logs.go:282] 0 containers: []
	W1206 10:55:24.392561  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:55:24.392567  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:55:24.392631  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:55:24.419045  405191 cri.go:89] found id: ""
	I1206 10:55:24.419059  405191 logs.go:282] 0 containers: []
	W1206 10:55:24.419066  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:55:24.419074  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:55:24.419084  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:55:24.485101  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:55:24.485123  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:55:24.506235  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:55:24.506258  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:55:24.586208  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:55:24.577740   17118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:24.578227   17118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:24.579907   17118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:24.580253   17118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:24.581928   17118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:55:24.577740   17118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:24.578227   17118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:24.579907   17118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:24.580253   17118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:24.581928   17118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:55:24.586218  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:55:24.586230  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:55:24.654219  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:55:24.654241  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:55:27.183198  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:27.194048  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:55:27.194116  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:55:27.223948  405191 cri.go:89] found id: ""
	I1206 10:55:27.223962  405191 logs.go:282] 0 containers: []
	W1206 10:55:27.223969  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:55:27.223974  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:55:27.224033  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:55:27.255792  405191 cri.go:89] found id: ""
	I1206 10:55:27.255807  405191 logs.go:282] 0 containers: []
	W1206 10:55:27.255814  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:55:27.255819  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:55:27.255882  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:55:27.285352  405191 cri.go:89] found id: ""
	I1206 10:55:27.285365  405191 logs.go:282] 0 containers: []
	W1206 10:55:27.285373  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:55:27.285380  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:55:27.285438  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:55:27.311572  405191 cri.go:89] found id: ""
	I1206 10:55:27.311599  405191 logs.go:282] 0 containers: []
	W1206 10:55:27.311606  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:55:27.311612  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:55:27.311684  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:55:27.337727  405191 cri.go:89] found id: ""
	I1206 10:55:27.337741  405191 logs.go:282] 0 containers: []
	W1206 10:55:27.337747  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:55:27.337753  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:55:27.337812  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:55:27.363513  405191 cri.go:89] found id: ""
	I1206 10:55:27.363527  405191 logs.go:282] 0 containers: []
	W1206 10:55:27.363534  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:55:27.363539  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:55:27.363611  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:55:27.390072  405191 cri.go:89] found id: ""
	I1206 10:55:27.390100  405191 logs.go:282] 0 containers: []
	W1206 10:55:27.390107  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:55:27.390115  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:55:27.390130  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:55:27.456548  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:55:27.456567  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:55:27.472626  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:55:27.472642  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:55:27.554055  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:55:27.545736   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:27.546254   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:27.548095   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:27.548446   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:27.550070   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:55:27.545736   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:27.546254   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:27.548095   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:27.548446   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:27.550070   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:55:27.554065  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:55:27.554076  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:55:27.622961  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:55:27.622984  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:55:30.156731  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:30.168052  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:55:30.168115  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:55:30.195190  405191 cri.go:89] found id: ""
	I1206 10:55:30.195205  405191 logs.go:282] 0 containers: []
	W1206 10:55:30.195237  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:55:30.195243  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:55:30.195315  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:55:30.222581  405191 cri.go:89] found id: ""
	I1206 10:55:30.222615  405191 logs.go:282] 0 containers: []
	W1206 10:55:30.222622  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:55:30.222628  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:55:30.222697  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:55:30.251144  405191 cri.go:89] found id: ""
	I1206 10:55:30.251162  405191 logs.go:282] 0 containers: []
	W1206 10:55:30.251173  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:55:30.251178  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:55:30.251280  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:55:30.282704  405191 cri.go:89] found id: ""
	I1206 10:55:30.282731  405191 logs.go:282] 0 containers: []
	W1206 10:55:30.282739  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:55:30.282744  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:55:30.282818  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:55:30.308787  405191 cri.go:89] found id: ""
	I1206 10:55:30.308802  405191 logs.go:282] 0 containers: []
	W1206 10:55:30.308809  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:55:30.308814  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:55:30.308881  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:55:30.334479  405191 cri.go:89] found id: ""
	I1206 10:55:30.334494  405191 logs.go:282] 0 containers: []
	W1206 10:55:30.334501  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:55:30.334507  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:55:30.334582  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:55:30.361350  405191 cri.go:89] found id: ""
	I1206 10:55:30.361365  405191 logs.go:282] 0 containers: []
	W1206 10:55:30.361372  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:55:30.361380  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:55:30.361390  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:55:30.438089  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:55:30.438120  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:55:30.453200  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:55:30.453217  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:55:30.539250  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:55:30.524592   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:30.527641   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:30.528089   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:30.529752   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:30.530427   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:55:30.524592   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:30.527641   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:30.528089   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:30.529752   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:30.530427   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:55:30.539272  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:55:30.539285  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:55:30.610101  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:55:30.610121  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:55:33.143484  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:33.153906  405191 kubeadm.go:602] duration metric: took 4m2.63956924s to restartPrimaryControlPlane
	W1206 10:55:33.153970  405191 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1206 10:55:33.154044  405191 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
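	(Run by hand, the reset minikube performs above would look like the sketch below; same binary path and CRI socket as in the log, with --force skipping the confirmation prompt:

	sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
	  kubeadm reset --cri-socket /var/run/crio/crio.sock --force
	)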
	I1206 10:55:33.564051  405191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:55:33.577264  405191 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 10:55:33.585285  405191 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 10:55:33.585343  405191 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:55:33.593207  405191 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:55:33.593217  405191 kubeadm.go:158] found existing configuration files:
	
	I1206 10:55:33.593284  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:55:33.601281  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:55:33.601338  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:55:33.609078  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:55:33.617336  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:55:33.617395  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:55:33.625100  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:55:33.633096  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:55:33.633153  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:55:33.640767  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:55:33.648692  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:55:33.648783  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
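	(The grep-then-rm sequence above is a stale-config guard: each kubeconfig under /etc/kubernetes is kept only if it already points at this cluster's control-plane endpoint. A compact sketch of the same pattern follows; this is a hypothetical loop for illustration, not minikube's actual code:

	for f in admin kubelet controller-manager scheduler; do
	  conf="/etc/kubernetes/${f}.conf"
	  # Drop the file unless it references this profile's control-plane endpoint.
	  sudo grep -q 'https://control-plane.minikube.internal:8441' "$conf" 2>/dev/null \
	    || sudo rm -f "$conf"
	done
	)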
	I1206 10:55:33.656355  405191 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 10:55:33.695114  405191 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 10:55:33.695495  405191 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:55:33.776558  405191 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:55:33.776622  405191 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:55:33.776656  405191 kubeadm.go:319] OS: Linux
	I1206 10:55:33.776700  405191 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:55:33.776747  405191 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:55:33.776793  405191 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:55:33.776839  405191 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:55:33.776886  405191 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:55:33.776933  405191 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:55:33.776976  405191 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:55:33.777023  405191 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:55:33.777067  405191 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:55:33.839562  405191 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:55:33.839700  405191 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:55:33.839825  405191 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
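	(As the preflight note above suggests, the image pull can be done ahead of time; with this run's binary and config paths, that would be the following sketch:

	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config images pull \
	  --config /var/tmp/minikube/kubeadm.yaml
	)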
	I1206 10:55:33.847872  405191 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:55:33.851528  405191 out.go:252]   - Generating certificates and keys ...
	I1206 10:55:33.851642  405191 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:55:33.851732  405191 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:55:33.851823  405191 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 10:55:33.851888  405191 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 10:55:33.851963  405191 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 10:55:33.852020  405191 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 10:55:33.852092  405191 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 10:55:33.852157  405191 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 10:55:33.852236  405191 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 10:55:33.852314  405191 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 10:55:33.852354  405191 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 10:55:33.852412  405191 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:55:34.131310  405191 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:55:34.288855  405191 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:55:34.553487  405191 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:55:35.148231  405191 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:55:35.211116  405191 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:55:35.211864  405191 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:55:35.214714  405191 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:55:35.218231  405191 out.go:252]   - Booting up control plane ...
	I1206 10:55:35.218330  405191 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:55:35.218406  405191 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:55:35.218472  405191 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:55:35.235870  405191 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:55:35.235976  405191 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:55:35.244902  405191 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:55:35.245320  405191 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:55:35.245379  405191 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:55:35.375634  405191 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:55:35.375747  405191 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 10:59:35.374512  405191 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000270227s
	I1206 10:59:35.374544  405191 kubeadm.go:319] 
	I1206 10:59:35.374605  405191 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 10:59:35.374643  405191 kubeadm.go:319] 	- The kubelet is not running
	I1206 10:59:35.374758  405191 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 10:59:35.374763  405191 kubeadm.go:319] 
	I1206 10:59:35.374876  405191 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 10:59:35.374910  405191 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 10:59:35.374942  405191 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 10:59:35.374945  405191 kubeadm.go:319] 
	I1206 10:59:35.380563  405191 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 10:59:35.380998  405191 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 10:59:35.381115  405191 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 10:59:35.381348  405191 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 10:59:35.381353  405191 kubeadm.go:319] 
	I1206 10:59:35.381420  405191 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1206 10:59:35.381523  405191 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000270227s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1206 10:59:35.381613  405191 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1206 10:59:35.796714  405191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:59:35.809334  405191 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 10:59:35.809388  405191 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:59:35.817444  405191 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:59:35.817452  405191 kubeadm.go:158] found existing configuration files:
	
	I1206 10:59:35.817502  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:59:35.825442  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:59:35.825501  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:59:35.833082  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:59:35.842093  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:59:35.842159  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:59:35.851759  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:59:35.860099  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:59:35.860161  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:59:35.867900  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:59:35.876130  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:59:35.876188  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 10:59:35.884013  405191 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 10:59:35.926383  405191 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 10:59:35.926438  405191 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:59:36.016832  405191 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:59:36.016925  405191 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:59:36.016974  405191 kubeadm.go:319] OS: Linux
	I1206 10:59:36.017019  405191 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:59:36.017071  405191 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:59:36.017119  405191 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:59:36.017173  405191 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:59:36.017220  405191 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:59:36.017277  405191 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:59:36.017339  405191 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:59:36.017401  405191 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:59:36.017447  405191 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:59:36.080832  405191 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:59:36.080951  405191 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:59:36.081048  405191 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:59:36.091906  405191 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:59:36.097223  405191 out.go:252]   - Generating certificates and keys ...
	I1206 10:59:36.097345  405191 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:59:36.097426  405191 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:59:36.097511  405191 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 10:59:36.097596  405191 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 10:59:36.097675  405191 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 10:59:36.097750  405191 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 10:59:36.097815  405191 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 10:59:36.097876  405191 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 10:59:36.097954  405191 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 10:59:36.098026  405191 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 10:59:36.098063  405191 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 10:59:36.098122  405191 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:59:36.705762  405191 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:59:36.885173  405191 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:59:37.204953  405191 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:59:37.715956  405191 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:59:37.848965  405191 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:59:37.849735  405191 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:59:37.853600  405191 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:59:37.856590  405191 out.go:252]   - Booting up control plane ...
	I1206 10:59:37.856698  405191 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:59:37.856819  405191 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:59:37.858671  405191 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:59:37.873039  405191 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:59:37.873143  405191 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:59:37.880838  405191 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:59:37.881129  405191 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:59:37.881370  405191 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:59:38.015956  405191 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:59:38.016070  405191 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 11:03:38.011572  405191 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000448393s
	I1206 11:03:38.011605  405191 kubeadm.go:319] 
	I1206 11:03:38.011721  405191 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 11:03:38.011777  405191 kubeadm.go:319] 	- The kubelet is not running
	I1206 11:03:38.012051  405191 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 11:03:38.012060  405191 kubeadm.go:319] 
	I1206 11:03:38.012421  405191 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 11:03:38.012573  405191 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 11:03:38.012628  405191 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 11:03:38.012633  405191 kubeadm.go:319] 
	I1206 11:03:38.018189  405191 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 11:03:38.018608  405191 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 11:03:38.018716  405191 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 11:03:38.018960  405191 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 11:03:38.018965  405191 kubeadm.go:319] 
	I1206 11:03:38.019033  405191 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 11:03:38.019089  405191 kubeadm.go:403] duration metric: took 12m7.551905569s to StartCluster
	I1206 11:03:38.019121  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:03:38.019191  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:03:38.048894  405191 cri.go:89] found id: ""
	I1206 11:03:38.048909  405191 logs.go:282] 0 containers: []
	W1206 11:03:38.048917  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:03:38.048922  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:03:38.049009  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:03:38.077125  405191 cri.go:89] found id: ""
	I1206 11:03:38.077141  405191 logs.go:282] 0 containers: []
	W1206 11:03:38.077149  405191 logs.go:284] No container was found matching "etcd"
	I1206 11:03:38.077154  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:03:38.077229  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:03:38.104859  405191 cri.go:89] found id: ""
	I1206 11:03:38.104873  405191 logs.go:282] 0 containers: []
	W1206 11:03:38.104881  405191 logs.go:284] No container was found matching "coredns"
	I1206 11:03:38.104886  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:03:38.104946  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:03:38.131268  405191 cri.go:89] found id: ""
	I1206 11:03:38.131282  405191 logs.go:282] 0 containers: []
	W1206 11:03:38.131289  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:03:38.131295  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:03:38.131356  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:03:38.161469  405191 cri.go:89] found id: ""
	I1206 11:03:38.161483  405191 logs.go:282] 0 containers: []
	W1206 11:03:38.161490  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:03:38.161495  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:03:38.161555  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:03:38.191440  405191 cri.go:89] found id: ""
	I1206 11:03:38.191454  405191 logs.go:282] 0 containers: []
	W1206 11:03:38.191461  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:03:38.191467  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:03:38.191536  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:03:38.219921  405191 cri.go:89] found id: ""
	I1206 11:03:38.219935  405191 logs.go:282] 0 containers: []
	W1206 11:03:38.219943  405191 logs.go:284] No container was found matching "kindnet"
	I1206 11:03:38.219951  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:03:38.219962  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:03:38.285137  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:03:38.277076   21116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:38.277519   21116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:38.279007   21116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:38.279647   21116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:38.281164   21116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:03:38.277076   21116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:38.277519   21116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:38.279007   21116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:38.279647   21116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:38.281164   21116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:03:38.285157  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:03:38.285169  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:03:38.355235  405191 logs.go:123] Gathering logs for container status ...
	I1206 11:03:38.355259  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:03:38.391661  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 11:03:38.391679  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:03:38.462714  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 11:03:38.462733  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1206 11:03:38.480853  405191 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000448393s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1206 11:03:38.480894  405191 out.go:285] * 
	W1206 11:03:38.480951  405191 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000448393s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 11:03:38.480964  405191 out.go:285] * 
	W1206 11:03:38.483093  405191 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 11:03:38.488282  405191 out.go:203] 
	W1206 11:03:38.491978  405191 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000448393s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 11:03:38.492089  405191 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 11:03:38.492161  405191 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1206 11:03:38.495164  405191 out.go:203] 
	
	
	==> CRI-O <==
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.084499828Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=f0fd0946-b323-435f-946c-e412850eb9c0 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.085495997Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=34b6bb47-44a4-4780-9567-04c497973fa7 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.08608111Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=e26253f3-5094-4fbe-b6d1-306f2e31fa9a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.086661177Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=f8a4ffa3-a2d3-4c05-ba97-fd167ad1ff4e name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.087187984Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=43819321-fbd2-4155-9f0b-c716c27fc9ce name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.087957288Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=aeb2085e-c4e7-4d42-9049-5042f515cbdb name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.088481658Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=0f01a640-f6a6-41dd-afc3-f5cae208f89a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.079725515Z" level=info msg="Checking image status: kicbase/echo-server:functional-196950" id=f87ee22d-4408-46c2-8930-0ab8ba7ffa52 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.079908966Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.079954112Z" level=info msg="Image kicbase/echo-server:functional-196950 not found" id=f87ee22d-4408-46c2-8930-0ab8ba7ffa52 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.080016438Z" level=info msg="Neither image nor artifact kicbase/echo-server:functional-196950 found" id=f87ee22d-4408-46c2-8930-0ab8ba7ffa52 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.114067912Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-196950" id=da9dda5e-17fc-43e5-a93c-9057adb4fa98 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.114216122Z" level=info msg="Image docker.io/kicbase/echo-server:functional-196950 not found" id=da9dda5e-17fc-43e5-a93c-9057adb4fa98 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.114253587Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-196950 found" id=da9dda5e-17fc-43e5-a93c-9057adb4fa98 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.143186327Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-196950" id=629f3d91-fd66-4652-88b6-cbe010464984 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.143320859Z" level=info msg="Image localhost/kicbase/echo-server:functional-196950 not found" id=629f3d91-fd66-4652-88b6-cbe010464984 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.143363608Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-196950 found" id=629f3d91-fd66-4652-88b6-cbe010464984 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:52 functional-196950 crio[9931]: time="2025-12-06T11:03:52.005475943Z" level=info msg="Checking image status: kicbase/echo-server:functional-196950" id=c24c3057-b5d3-4f9d-ac3b-5bbc48ff2411 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:52 functional-196950 crio[9931]: time="2025-12-06T11:03:52.005746649Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 06 11:03:52 functional-196950 crio[9931]: time="2025-12-06T11:03:52.005815245Z" level=info msg="Image kicbase/echo-server:functional-196950 not found" id=c24c3057-b5d3-4f9d-ac3b-5bbc48ff2411 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:52 functional-196950 crio[9931]: time="2025-12-06T11:03:52.005906667Z" level=info msg="Neither image nor artifact kicbase/echo-server:functional-196950 found" id=c24c3057-b5d3-4f9d-ac3b-5bbc48ff2411 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:52 functional-196950 crio[9931]: time="2025-12-06T11:03:52.062141238Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-196950" id=2c21deca-fd70-4153-95f4-22abbc99a70a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:52 functional-196950 crio[9931]: time="2025-12-06T11:03:52.062283549Z" level=info msg="Image docker.io/kicbase/echo-server:functional-196950 not found" id=2c21deca-fd70-4153-95f4-22abbc99a70a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:52 functional-196950 crio[9931]: time="2025-12-06T11:03:52.062324296Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-196950 found" id=2c21deca-fd70-4153-95f4-22abbc99a70a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:52 functional-196950 crio[9931]: time="2025-12-06T11:03:52.089472813Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-196950" id=a99d8ece-a491-4b6e-b578-7c4c50168ae2 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:05:29.867982   23271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:05:29.868615   23271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:05:29.870110   23271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:05:29.870569   23271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:05:29.872066   23271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 6 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014752] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503231] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.065820] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.901896] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.944603] kauditd_printk_skb: 39 callbacks suppressed
	[Dec 6 09:04] hrtimer: interrupt took 32230230 ns
	[Dec 6 10:25] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 6 10:26] overlayfs: idmapped layers are currently not supported
	[  +0.066821] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 6 10:32] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:33] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:51] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 11:05:29 up  2:48,  0 user,  load average: 0.30, 0.34, 0.52
	Linux functional-196950 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 11:05:27 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:05:28 functional-196950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1948.
	Dec 06 11:05:28 functional-196950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:05:28 functional-196950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:05:28 functional-196950 kubelet[23159]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:05:28 functional-196950 kubelet[23159]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:05:28 functional-196950 kubelet[23159]: E1206 11:05:28.268696   23159 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:05:28 functional-196950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:05:28 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:05:28 functional-196950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1949.
	Dec 06 11:05:28 functional-196950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:05:28 functional-196950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:05:28 functional-196950 kubelet[23178]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:05:28 functional-196950 kubelet[23178]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:05:28 functional-196950 kubelet[23178]: E1206 11:05:28.990397   23178 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:05:28 functional-196950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:05:28 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:05:29 functional-196950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1950.
	Dec 06 11:05:29 functional-196950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:05:29 functional-196950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:05:29 functional-196950 kubelet[23249]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:05:29 functional-196950 kubelet[23249]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:05:29 functional-196950 kubelet[23249]: E1206 11:05:29.783448   23249 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:05:29 functional-196950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:05:29 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
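The kubelet excerpt above is a systemd crash loop: each restart (counter 1948 through 1950) dies during config validation because the host still mounts cgroup v1 while this v1.35.0-beta.0 kubelet refuses to run on it. A minimal way to confirm the hierarchy from outside the node, assuming the profile name from this report (the KubeletConfiguration opt-out field, failCgroupV1 in recent releases, is cited from memory and worth verifying against the v1.35 docs):

	# "tmpfs" here means the node mounts cgroup v1 (legacy/hybrid); "cgroup2fs" means v2.
	out/minikube-linux-arm64 -p functional-196950 ssh -- stat -fc %T /sys/fs/cgroup
	# Tail the restart loop seen above:
	out/minikube-linux-arm64 -p functional-196950 ssh -- sudo journalctl -u kubelet --no-pager -n 20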
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-196950 -n functional-196950
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-196950 -n functional-196950: exit status 2 (370.384682ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-196950" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.42s)
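The apiserver probe above uses minikube's Go-template status output. A hedged one-liner for scripting the same check (profile name from this report; treating any non-zero exit as "component down" is an assumption based on the "may be ok" note above):

	# Prints the component state, e.g. "Stopped"; exit status is non-zero when degraded.
	out/minikube-linux-arm64 status --format='{{.APIServer}}' -p functional-196950 || echo "apiserver unhealthy (exit $?)"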

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.7s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
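The helper is polling a label-selected pod list; the same query by hand, assuming the kubeconfig context created for this profile:

	kubectl --context functional-196950 -n kube-system get pods -l integration-test=storage-provisioner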
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
... [identical WARNING repeated 8 times in total] ...
I1206 11:04:05.165525  364855 retry.go:31] will retry after 2.188425489s: Temporary Error: Get "http://10.103.244.119": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
... [identical WARNING repeated 12 times in total] ...
I1206 11:04:17.354889  364855 retry.go:31] will retry after 6.579486383s: Temporary Error: Get "http://10.103.244.119": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
... [identical WARNING repeated 17 times in total] ...
I1206 11:04:33.935450  364855 retry.go:31] will retry after 5.026868495s: Temporary Error: Get "http://10.103.244.119": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
... [identical WARNING repeated 15 times in total] ...
I1206 11:04:48.964547  364855 retry.go:31] will retry after 5.423527545s: Temporary Error: Get "http://10.103.244.119": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
... [identical WARNING repeated 15 times in total] ...
I1206 11:05:04.389839  364855 retry.go:31] will retry after 13.709861671s: Temporary Error: Get "http://10.103.244.119": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
... [identical WARNING repeated 86 times in total] ...
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
E1206 11:06:44.887359  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:337: [the identical warning above repeated 71 more times until the 4m0s poll deadline expired]
helpers_test.go:337: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:50: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-196950 -n functional-196950
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-196950 -n functional-196950: exit status 2 (307.837294ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-196950" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
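[editor's note] The repeated warnings above come from a poll loop that lists pods by label selector until they are Running or the 4m0s context expires; with the apiserver down, every list attempt fails with "connection refused" until the client rate limiter surfaces the context deadline. A minimal client-go sketch of that pattern (an illustration only, not the actual helpers_test.go code; the kubeconfig path is a placeholder):

	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func allRunning(pods []corev1.Pod) bool {
		for _, p := range pods {
			if p.Status.Phase != corev1.PodRunning {
				return false
			}
		}
		return true
	}
	
	func main() {
		// Placeholder kubeconfig path, for illustration only.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
	
		// Mirror the 4m0s budget seen in the failure message above.
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
	
		selector := "integration-test=storage-provisioner"
		for {
			pods, err := client.CoreV1().Pods("kube-system").List(ctx,
				metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				// With the apiserver down this is where "connection refused" (and,
				// once the deadline passes, the client rate limiter's
				// "context deadline exceeded") shows up on each iteration.
				fmt.Printf("WARNING: pod list for %q returned: %v\n", selector, err)
			} else if len(pods.Items) > 0 && allRunning(pods.Items) {
				fmt.Println("pods are Running")
				return
			}
			select {
			case <-ctx.Done():
				fmt.Printf("pods %q failed to start: %v\n", selector, ctx.Err())
				return
			case <-time.After(2 * time.Second):
			}
		}
	}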
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-196950
helpers_test.go:243: (dbg) docker inspect functional-196950:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1",
	        "Created": "2025-12-06T10:36:45.201779678Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 393848,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T10:36:45.318229053Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/hostname",
	        "HostsPath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/hosts",
	        "LogPath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1-json.log",
	        "Name": "/functional-196950",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-196950:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-196950",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1",
	                "LowerDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1-init/diff:/var/lib/docker/overlay2/5011226d55616c9977b14c1fe617d1302fe59373df05ce8ec6e21b79143a1c57/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-196950",
	                "Source": "/var/lib/docker/volumes/functional-196950/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-196950",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-196950",
	                "name.minikube.sigs.k8s.io": "functional-196950",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9b8f961d55d7529aed7b841f2ac9f818c22ff12b8ad73f2d6bcee22656d9749a",
	            "SandboxKey": "/var/run/docker/netns/9b8f961d55d7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33158"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-196950": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:c1:40:2a:93:47",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a566bfdfd33a868cf61e5b18b36cbd55e9868f24cbb091e055ae606aeb8c6f03",
	                    "EndpointID": "452fe32bde0c42c4c35d700488ae93aeecc6c6a971ac6f1a8a492dbc4b328ed9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-196950",
	                        "d150aac7296d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
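[editor's note] The NetworkSettings.Ports block in the inspect output shows each container port published on an ephemeral 127.0.0.1 host port (for example 8441/tcp, the apiserver, mapped to 33161). A small sketch of recovering that mapping with docker inspect's Go-template --format (an assumed helper, not part of the test suite):

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		// --format applies a Go template to the same structure printed above;
		// this indexes NetworkSettings.Ports["8441/tcp"][0].HostPort.
		out, err := exec.Command("docker", "inspect", "--format",
			`{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`,
			"functional-196950").Output()
		if err != nil {
			panic(err)
		}
		// For the container inspected above this prints 127.0.0.1:33161.
		fmt.Printf("apiserver published on 127.0.0.1:%s\n", strings.TrimSpace(string(out)))
	}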
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-196950 -n functional-196950
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-196950 -n functional-196950: exit status 2 (341.504593ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 logs -n 25
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                     ARGS                                                                      │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-196950 ssh findmnt -T /mount-9p | grep 9p                                                                                          │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │ 06 Dec 25 11:05 UTC │
	│ ssh            │ functional-196950 ssh -- ls -la /mount-9p                                                                                                     │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │ 06 Dec 25 11:05 UTC │
	│ ssh            │ functional-196950 ssh sudo umount -f /mount-9p                                                                                                │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │                     │
	│ mount          │ -p functional-196950 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo571722621/001:/mount2 --alsologtostderr -v=1           │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │                     │
	│ mount          │ -p functional-196950 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo571722621/001:/mount3 --alsologtostderr -v=1           │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │                     │
	│ mount          │ -p functional-196950 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo571722621/001:/mount1 --alsologtostderr -v=1           │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │                     │
	│ ssh            │ functional-196950 ssh findmnt -T /mount1                                                                                                      │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │                     │
	│ ssh            │ functional-196950 ssh findmnt -T /mount1                                                                                                      │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │ 06 Dec 25 11:05 UTC │
	│ ssh            │ functional-196950 ssh findmnt -T /mount2                                                                                                      │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │ 06 Dec 25 11:05 UTC │
	│ ssh            │ functional-196950 ssh findmnt -T /mount3                                                                                                      │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │ 06 Dec 25 11:05 UTC │
	│ mount          │ -p functional-196950 --kill=true                                                                                                              │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │                     │
	│ start          │ -p functional-196950 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │                     │
	│ start          │ -p functional-196950 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0           │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │                     │
	│ start          │ -p functional-196950 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-196950 --alsologtostderr -v=1                                                                                │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │                     │
	│ update-context │ functional-196950 update-context --alsologtostderr -v=2                                                                                       │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │ 06 Dec 25 11:05 UTC │
	│ update-context │ functional-196950 update-context --alsologtostderr -v=2                                                                                       │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │ 06 Dec 25 11:05 UTC │
	│ update-context │ functional-196950 update-context --alsologtostderr -v=2                                                                                       │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │ 06 Dec 25 11:05 UTC │
	│ image          │ functional-196950 image ls --format short --alsologtostderr                                                                                   │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │ 06 Dec 25 11:05 UTC │
	│ image          │ functional-196950 image ls --format yaml --alsologtostderr                                                                                    │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │ 06 Dec 25 11:05 UTC │
	│ ssh            │ functional-196950 ssh pgrep buildkitd                                                                                                         │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │                     │
	│ image          │ functional-196950 image build -t localhost/my-image:functional-196950 testdata/build --alsologtostderr                                        │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │ 06 Dec 25 11:05 UTC │
	│ image          │ functional-196950 image ls                                                                                                                    │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │ 06 Dec 25 11:05 UTC │
	│ image          │ functional-196950 image ls --format json --alsologtostderr                                                                                    │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │ 06 Dec 25 11:05 UTC │
	│ image          │ functional-196950 image ls --format table --alsologtostderr                                                                                   │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:05 UTC │ 06 Dec 25 11:05 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 11:05:41
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 11:05:41.957197  423975 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:05:41.957372  423975 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:05:41.957399  423975 out.go:374] Setting ErrFile to fd 2...
	I1206 11:05:41.957418  423975 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:05:41.957813  423975 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 11:05:41.958250  423975 out.go:368] Setting JSON to false
	I1206 11:05:41.959202  423975 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10093,"bootTime":1765009049,"procs":164,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 11:05:41.959273  423975 start.go:143] virtualization:  
	I1206 11:05:41.962659  423975 out.go:179] * [functional-196950] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 11:05:41.965760  423975 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 11:05:41.965811  423975 notify.go:221] Checking for updates...
	I1206 11:05:41.971719  423975 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 11:05:41.974704  423975 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 11:05:41.977639  423975 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-362985/.minikube
	I1206 11:05:41.980611  423975 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 11:05:41.983549  423975 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 11:05:41.986961  423975 config.go:182] Loaded profile config "functional-196950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 11:05:41.987685  423975 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 11:05:42.035258  423975 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 11:05:42.035460  423975 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:05:42.103002  423975 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:05:42.09106596 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:05:42.103130  423975 docker.go:319] overlay module found
	I1206 11:05:42.106615  423975 out.go:179] * Using the docker driver based on existing profile
	I1206 11:05:42.118016  423975 start.go:309] selected driver: docker
	I1206 11:05:42.118069  423975 start.go:927] validating driver "docker" against &{Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:05:42.118200  423975 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 11:05:42.128616  423975 out.go:203] 
	W1206 11:05:42.139461  423975 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1206 11:05:42.143337  423975 out.go:203] 
	
	
	==> CRI-O <==
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.084499828Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=f0fd0946-b323-435f-946c-e412850eb9c0 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.085495997Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=34b6bb47-44a4-4780-9567-04c497973fa7 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.08608111Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=e26253f3-5094-4fbe-b6d1-306f2e31fa9a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.086661177Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=f8a4ffa3-a2d3-4c05-ba97-fd167ad1ff4e name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.087187984Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=43819321-fbd2-4155-9f0b-c716c27fc9ce name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.087957288Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=aeb2085e-c4e7-4d42-9049-5042f515cbdb name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.088481658Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=0f01a640-f6a6-41dd-afc3-f5cae208f89a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.079725515Z" level=info msg="Checking image status: kicbase/echo-server:functional-196950" id=f87ee22d-4408-46c2-8930-0ab8ba7ffa52 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.079908966Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.079954112Z" level=info msg="Image kicbase/echo-server:functional-196950 not found" id=f87ee22d-4408-46c2-8930-0ab8ba7ffa52 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.080016438Z" level=info msg="Neither image nor artifact kicbase/echo-server:functional-196950 found" id=f87ee22d-4408-46c2-8930-0ab8ba7ffa52 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.114067912Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-196950" id=da9dda5e-17fc-43e5-a93c-9057adb4fa98 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.114216122Z" level=info msg="Image docker.io/kicbase/echo-server:functional-196950 not found" id=da9dda5e-17fc-43e5-a93c-9057adb4fa98 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.114253587Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-196950 found" id=da9dda5e-17fc-43e5-a93c-9057adb4fa98 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.143186327Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-196950" id=629f3d91-fd66-4652-88b6-cbe010464984 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.143320859Z" level=info msg="Image localhost/kicbase/echo-server:functional-196950 not found" id=629f3d91-fd66-4652-88b6-cbe010464984 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.143363608Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-196950 found" id=629f3d91-fd66-4652-88b6-cbe010464984 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:52 functional-196950 crio[9931]: time="2025-12-06T11:03:52.005475943Z" level=info msg="Checking image status: kicbase/echo-server:functional-196950" id=c24c3057-b5d3-4f9d-ac3b-5bbc48ff2411 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:52 functional-196950 crio[9931]: time="2025-12-06T11:03:52.005746649Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 06 11:03:52 functional-196950 crio[9931]: time="2025-12-06T11:03:52.005815245Z" level=info msg="Image kicbase/echo-server:functional-196950 not found" id=c24c3057-b5d3-4f9d-ac3b-5bbc48ff2411 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:52 functional-196950 crio[9931]: time="2025-12-06T11:03:52.005906667Z" level=info msg="Neither image nor artifact kicbase/echo-server:functional-196950 found" id=c24c3057-b5d3-4f9d-ac3b-5bbc48ff2411 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:52 functional-196950 crio[9931]: time="2025-12-06T11:03:52.062141238Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-196950" id=2c21deca-fd70-4153-95f4-22abbc99a70a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:52 functional-196950 crio[9931]: time="2025-12-06T11:03:52.062283549Z" level=info msg="Image docker.io/kicbase/echo-server:functional-196950 not found" id=2c21deca-fd70-4153-95f4-22abbc99a70a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:52 functional-196950 crio[9931]: time="2025-12-06T11:03:52.062324296Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-196950 found" id=2c21deca-fd70-4153-95f4-22abbc99a70a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:52 functional-196950 crio[9931]: time="2025-12-06T11:03:52.089472813Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-196950" id=a99d8ece-a491-4b6e-b578-7c4c50168ae2 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:07:58.855331   25449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:07:58.856120   25449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:07:58.857719   25449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:07:58.858045   25449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:07:58.859728   25449 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 6 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014752] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503231] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.065820] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.901896] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.944603] kauditd_printk_skb: 39 callbacks suppressed
	[Dec 6 09:04] hrtimer: interrupt took 32230230 ns
	[Dec 6 10:25] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 6 10:26] overlayfs: idmapped layers are currently not supported
	[  +0.066821] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 6 10:32] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:33] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:51] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 11:07:58 up  2:50,  0 user,  load average: 0.30, 0.35, 0.50
	Linux functional-196950 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 11:07:56 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:07:56 functional-196950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2146.
	Dec 06 11:07:56 functional-196950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:07:56 functional-196950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:07:56 functional-196950 kubelet[25324]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:07:56 functional-196950 kubelet[25324]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:07:56 functional-196950 kubelet[25324]: E1206 11:07:56.768743   25324 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:07:56 functional-196950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:07:56 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:07:57 functional-196950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2147.
	Dec 06 11:07:57 functional-196950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:07:57 functional-196950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:07:57 functional-196950 kubelet[25330]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:07:57 functional-196950 kubelet[25330]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:07:57 functional-196950 kubelet[25330]: E1206 11:07:57.517581   25330 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:07:57 functional-196950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:07:57 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:07:58 functional-196950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2148.
	Dec 06 11:07:58 functional-196950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:07:58 functional-196950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:07:58 functional-196950 kubelet[25364]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:07:58 functional-196950 kubelet[25364]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:07:58 functional-196950 kubelet[25364]: E1206 11:07:58.247295   25364 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:07:58 functional-196950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:07:58 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
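The kubelet section above is the root cause behind this failure: kubelet exits during config validation ("kubelet is configured to not run on a host using cgroup v1"), systemd has restarted it over 2,100 times, and as a result nothing ever serves the apiserver on port 8441 (hence the connection-refused errors under "describe nodes"). A minimal diagnostic sketch, assuming a Linux host with GNU coreutils; these commands are illustrative and were not part of the recorded run:

	# cgroup2fs means cgroup v2 (unified hierarchy); tmpfs means cgroup v1
	stat -fc %T /sys/fs/cgroup
	# the same check inside the minikube node container
	docker exec functional-196950 stat -fc %T /sys/fs/cgroup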
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-196950 -n functional-196950
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-196950 -n functional-196950: exit status 2 (352.867252ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-196950" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.70s)
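PersistentVolumeClaim fails for the same underlying reason: the control plane never comes up, so status reports the apiserver as Stopped and the kubectl steps are skipped. A hedged first step when reproducing locally (a sketch, not part of the recorded run) is to re-check component state and retry the start:

	# component state for the profile, then a restart attempt
	out/minikube-linux-arm64 status -p functional-196950
	out/minikube-linux-arm64 start -p functional-196950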

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (3.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-196950 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-196950 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (89.941959ms)

                                                
                                                
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-196950 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
	(stdout/stderr identical to the dump above)
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got the same stdout/stderr as above
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got the same stdout/stderr as above
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got the same stdout/stderr as above
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got the same stdout/stderr as above
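The template failure itself is a symptom: with the apiserver refusing connections, kubectl falls back to an empty List, and "index .items 0" panics because there is no element 0. A guarded variant of the template (a sketch only, not the test's actual command) degrades to empty output instead of erroring:

	# hypothetical guarded template: prints nothing when the node list is empty
	kubectl --context functional-196950 get nodes -o go-template='{{if .items}}{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}{{end}}'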
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-196950
helpers_test.go:243: (dbg) docker inspect functional-196950:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1",
	        "Created": "2025-12-06T10:36:45.201779678Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 393848,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T10:36:45.318229053Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/hostname",
	        "HostsPath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/hosts",
	        "LogPath": "/var/lib/docker/containers/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1/d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1-json.log",
	        "Name": "/functional-196950",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-196950:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-196950",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d150aac7296df08ea29025e393786b73508762060b973090a015d32710ce1ab1",
	                "LowerDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1-init/diff:/var/lib/docker/overlay2/5011226d55616c9977b14c1fe617d1302fe59373df05ce8ec6e21b79143a1c57/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f152671624b7931c866e71e80f9c83d2d6187e85d609e78ba96bf34c7becf8e1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-196950",
	                "Source": "/var/lib/docker/volumes/functional-196950/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-196950",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-196950",
	                "name.minikube.sigs.k8s.io": "functional-196950",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9b8f961d55d7529aed7b841f2ac9f818c22ff12b8ad73f2d6bcee22656d9749a",
	            "SandboxKey": "/var/run/docker/netns/9b8f961d55d7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33158"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33159"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33160"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33161"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-196950": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:c1:40:2a:93:47",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a566bfdfd33a868cf61e5b18b36cbd55e9868f24cbb091e055ae606aeb8c6f03",
	                    "EndpointID": "452fe32bde0c42c4c35d700488ae93aeecc6c6a971ac6f1a8a492dbc4b328ed9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-196950",
	                        "d150aac7296d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
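The inspect output shows the node container itself is healthy: it is Running, has IP 192.168.49.2 on the functional-196950 network, and publishes the apiserver's 8441/tcp on 127.0.0.1:33161. So the connection refusals come from the missing apiserver process, not from Docker networking. One way to pull that mapped port straight out of inspect (standard docker templating, shown here only as a diagnostic sketch):

	# prints the host port bound to the apiserver's 8441/tcp (33161 in this run)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-196950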
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-196950 -n functional-196950
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-196950 -n functional-196950: exit status 2 (407.826127ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-196950 logs -n 25: (1.462532126s)
helpers_test.go:260: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-196950 ssh sudo crictl images                                                                                                                     │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ ssh     │ functional-196950 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                           │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ ssh     │ functional-196950 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                      │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │                     │
	│ cache   │ functional-196950 cache reload                                                                                                                               │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ ssh     │ functional-196950 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                      │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                             │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │ 06 Dec 25 10:51 UTC │
	│ kubectl │ functional-196950 kubectl -- --context functional-196950 get pods                                                                                            │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │                     │
	│ start   │ -p functional-196950 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                     │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 10:51 UTC │                     │
	│ cp      │ functional-196950 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                           │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │ 06 Dec 25 11:03 UTC │
	│ config  │ functional-196950 config unset cpus                                                                                                                          │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │ 06 Dec 25 11:03 UTC │
	│ config  │ functional-196950 config get cpus                                                                                                                            │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │                     │
	│ config  │ functional-196950 config set cpus 2                                                                                                                          │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │ 06 Dec 25 11:03 UTC │
	│ config  │ functional-196950 config get cpus                                                                                                                            │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │ 06 Dec 25 11:03 UTC │
	│ config  │ functional-196950 config unset cpus                                                                                                                          │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │ 06 Dec 25 11:03 UTC │
	│ ssh     │ functional-196950 ssh -n functional-196950 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │ 06 Dec 25 11:03 UTC │
	│ config  │ functional-196950 config get cpus                                                                                                                            │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │                     │
	│ license │                                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │ 06 Dec 25 11:03 UTC │
	│ cp      │ functional-196950 cp functional-196950:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp3781777450/001/cp-test.txt │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │ 06 Dec 25 11:03 UTC │
	│ ssh     │ functional-196950 ssh sudo systemctl is-active docker                                                                                                        │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │                     │
	│ ssh     │ functional-196950 ssh -n functional-196950 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │ 06 Dec 25 11:03 UTC │
	│ ssh     │ functional-196950 ssh sudo systemctl is-active containerd                                                                                                    │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │                     │
	│ cp      │ functional-196950 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                    │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │ 06 Dec 25 11:03 UTC │
	│ ssh     │ functional-196950 ssh -n functional-196950 sudo cat /tmp/does/not/exist/cp-test.txt                                                                          │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │ 06 Dec 25 11:03 UTC │
	│ image   │ functional-196950 image load --daemon kicbase/echo-server:functional-196950 --alsologtostderr                                                                │ functional-196950 │ jenkins │ v1.37.0 │ 06 Dec 25 11:03 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:51:25
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:51:25.658528  405191 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:51:25.659862  405191 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:51:25.659873  405191 out.go:374] Setting ErrFile to fd 2...
	I1206 10:51:25.659879  405191 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:51:25.660272  405191 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 10:51:25.660784  405191 out.go:368] Setting JSON to false
	I1206 10:51:25.661671  405191 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9237,"bootTime":1765009049,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 10:51:25.661825  405191 start.go:143] virtualization:  
	I1206 10:51:25.665170  405191 out.go:179] * [functional-196950] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:51:25.668974  405191 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 10:51:25.669057  405191 notify.go:221] Checking for updates...
	I1206 10:51:25.674658  405191 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:51:25.677504  405191 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 10:51:25.680242  405191 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-362985/.minikube
	I1206 10:51:25.683061  405191 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:51:25.685807  405191 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:51:25.689056  405191 config.go:182] Loaded profile config "functional-196950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:51:25.689150  405191 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:51:25.719603  405191 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:51:25.719706  405191 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:51:25.776170  405191 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-06 10:51:25.766414658 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:51:25.776279  405191 docker.go:319] overlay module found
	I1206 10:51:25.779319  405191 out.go:179] * Using the docker driver based on existing profile
	I1206 10:51:25.782157  405191 start.go:309] selected driver: docker
	I1206 10:51:25.782168  405191 start.go:927] validating driver "docker" against &{Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:51:25.782268  405191 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:51:25.782379  405191 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:51:25.843232  405191 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-06 10:51:25.834027648 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:51:25.843742  405191 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 10:51:25.843762  405191 cni.go:84] Creating CNI manager for ""
	I1206 10:51:25.843817  405191 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:51:25.843868  405191 start.go:353] cluster config:
	{Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
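The cni.go lines above record why kindnet is selected here: the driver is "docker", the runtime is "crio", and no explicit CNI is set in the cluster config (CNI: is empty). The following is a minimal Go sketch of that decision, using a hypothetical chooseCNI helper rather than minikube's actual source:

    package main

    import "fmt"

    // chooseCNI mirrors the logged decision; names and defaults are illustrative.
    func chooseCNI(driver, runtime, configured string) string {
        if configured != "" {
            return configured // an explicit CNI in the cluster config wins
        }
        if driver == "docker" && runtime == "crio" {
            return "kindnet" // the combination recommended in the log above
        }
        return "bridge" // assumed fallback, not taken from the log
    }

    func main() {
        fmt.Println(chooseCNI("docker", "crio", "")) // prints: kindnet
    }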
	I1206 10:51:25.846980  405191 out.go:179] * Starting "functional-196950" primary control-plane node in "functional-196950" cluster
	I1206 10:51:25.849840  405191 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 10:51:25.852721  405191 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:51:25.855512  405191 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 10:51:25.855549  405191 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1206 10:51:25.855557  405191 cache.go:65] Caching tarball of preloaded images
	I1206 10:51:25.855585  405191 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:51:25.855649  405191 preload.go:238] Found /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1206 10:51:25.855670  405191 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1206 10:51:25.855775  405191 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/config.json ...
	I1206 10:51:25.875281  405191 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 10:51:25.875292  405191 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 10:51:25.875312  405191 cache.go:243] Successfully downloaded all kic artifacts
	I1206 10:51:25.875342  405191 start.go:360] acquireMachinesLock for functional-196950: {Name:mkd2471f275d1d2a438cb4ce89f1d1521a0fb340 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 10:51:25.875462  405191 start.go:364] duration metric: took 100.145µs to acquireMachinesLock for "functional-196950"
	I1206 10:51:25.875483  405191 start.go:96] Skipping create...Using existing machine configuration
	I1206 10:51:25.875487  405191 fix.go:54] fixHost starting: 
	I1206 10:51:25.875763  405191 cli_runner.go:164] Run: docker container inspect functional-196950 --format={{.State.Status}}
	I1206 10:51:25.893454  405191 fix.go:112] recreateIfNeeded on functional-196950: state=Running err=<nil>
	W1206 10:51:25.893482  405191 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 10:51:25.896578  405191 out.go:252] * Updating the running docker "functional-196950" container ...
	I1206 10:51:25.896608  405191 machine.go:94] provisionDockerMachine start ...
	I1206 10:51:25.896697  405191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:51:25.913940  405191 main.go:143] libmachine: Using SSH client type: native
	I1206 10:51:25.914320  405191 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1206 10:51:25.914327  405191 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 10:51:26.075155  405191 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-196950
	
	I1206 10:51:26.075169  405191 ubuntu.go:182] provisioning hostname "functional-196950"
	I1206 10:51:26.075252  405191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:51:26.094744  405191 main.go:143] libmachine: Using SSH client type: native
	I1206 10:51:26.095070  405191 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1206 10:51:26.095080  405191 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-196950 && echo "functional-196950" | sudo tee /etc/hostname
	I1206 10:51:26.261114  405191 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-196950
	
	I1206 10:51:26.261197  405191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:51:26.279848  405191 main.go:143] libmachine: Using SSH client type: native
	I1206 10:51:26.280166  405191 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1206 10:51:26.280180  405191 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-196950' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-196950/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-196950' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 10:51:26.431933  405191 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 10:51:26.431953  405191 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-362985/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-362985/.minikube}
	I1206 10:51:26.431971  405191 ubuntu.go:190] setting up certificates
	I1206 10:51:26.431995  405191 provision.go:84] configureAuth start
	I1206 10:51:26.432056  405191 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-196950
	I1206 10:51:26.450343  405191 provision.go:143] copyHostCerts
	I1206 10:51:26.450415  405191 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem, removing ...
	I1206 10:51:26.450432  405191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem
	I1206 10:51:26.450505  405191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem (1082 bytes)
	I1206 10:51:26.450607  405191 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem, removing ...
	I1206 10:51:26.450611  405191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem
	I1206 10:51:26.450636  405191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem (1123 bytes)
	I1206 10:51:26.450689  405191 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem, removing ...
	I1206 10:51:26.450693  405191 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem
	I1206 10:51:26.450714  405191 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem (1679 bytes)
	I1206 10:51:26.450755  405191 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem org=jenkins.functional-196950 san=[127.0.0.1 192.168.49.2 functional-196950 localhost minikube]
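The server certificate generated above carries the SANs listed in san=[...]. As a rough illustration, here is a self-contained Go sketch that issues a certificate with the same SAN set using only the standard library; it self-signs for brevity, whereas the real flow signs with the ca.pem/ca-key.pem shown in the log:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.functional-196950"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
            // SANs copied from the san=[...] list in the log line above.
            DNSNames:    []string{"functional-196950", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        // Self-signed here for brevity; minikube signs with its CA key instead.
        der, err := x509.CreateCertificate(rand.Reader, tpl, tpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        _ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }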
	I1206 10:51:26.540911  405191 provision.go:177] copyRemoteCerts
	I1206 10:51:26.540967  405191 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 10:51:26.541011  405191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:51:26.559000  405191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:51:26.664415  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 10:51:26.682850  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1206 10:51:26.700635  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 10:51:26.720260  405191 provision.go:87] duration metric: took 288.251554ms to configureAuth
	I1206 10:51:26.720277  405191 ubuntu.go:206] setting minikube options for container-runtime
	I1206 10:51:26.720482  405191 config.go:182] Loaded profile config "functional-196950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 10:51:26.720577  405191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:51:26.740294  405191 main.go:143] libmachine: Using SSH client type: native
	I1206 10:51:26.740607  405191 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33158 <nil> <nil>}
	I1206 10:51:26.740618  405191 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 10:51:27.107160  405191 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 10:51:27.107175  405191 machine.go:97] duration metric: took 1.210560762s to provisionDockerMachine
	I1206 10:51:27.107185  405191 start.go:293] postStartSetup for "functional-196950" (driver="docker")
	I1206 10:51:27.107196  405191 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 10:51:27.107253  405191 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 10:51:27.107294  405191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:51:27.129039  405191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:51:27.236148  405191 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 10:51:27.240016  405191 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 10:51:27.240036  405191 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 10:51:27.240047  405191 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-362985/.minikube/addons for local assets ...
	I1206 10:51:27.240125  405191 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-362985/.minikube/files for local assets ...
	I1206 10:51:27.240216  405191 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem -> 3648552.pem in /etc/ssl/certs
	I1206 10:51:27.240311  405191 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/test/nested/copy/364855/hosts -> hosts in /etc/test/nested/copy/364855
	I1206 10:51:27.240389  405191 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/364855
	I1206 10:51:27.248525  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem --> /etc/ssl/certs/3648552.pem (1708 bytes)
	I1206 10:51:27.267246  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/test/nested/copy/364855/hosts --> /etc/test/nested/copy/364855/hosts (40 bytes)
	I1206 10:51:27.285080  405191 start.go:296] duration metric: took 177.880099ms for postStartSetup
	I1206 10:51:27.285152  405191 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 10:51:27.285189  405191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:51:27.302563  405191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:51:27.404400  405191 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 10:51:27.408968  405191 fix.go:56] duration metric: took 1.533473357s for fixHost
	I1206 10:51:27.408984  405191 start.go:83] releasing machines lock for "functional-196950", held for 1.533513702s
	I1206 10:51:27.409052  405191 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-196950
	I1206 10:51:27.427444  405191 ssh_runner.go:195] Run: cat /version.json
	I1206 10:51:27.427475  405191 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 10:51:27.427488  405191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:51:27.427532  405191 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
	I1206 10:51:27.449136  405191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:51:27.450292  405191 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
	I1206 10:51:27.555364  405191 ssh_runner.go:195] Run: systemctl --version
	I1206 10:51:27.645936  405191 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 10:51:27.683240  405191 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 10:51:27.687562  405191 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 10:51:27.687626  405191 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 10:51:27.695460  405191 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 10:51:27.695474  405191 start.go:496] detecting cgroup driver to use...
	I1206 10:51:27.695505  405191 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 10:51:27.695551  405191 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 10:51:27.711018  405191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 10:51:27.724651  405191 docker.go:218] disabling cri-docker service (if available) ...
	I1206 10:51:27.724707  405191 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 10:51:27.740806  405191 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 10:51:27.754100  405191 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 10:51:27.883046  405191 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 10:51:28.013378  405191 docker.go:234] disabling docker service ...
	I1206 10:51:28.013440  405191 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 10:51:28.030310  405191 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 10:51:28.044424  405191 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 10:51:28.162200  405191 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 10:51:28.315775  405191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 10:51:28.333888  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 10:51:28.350625  405191 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 10:51:28.350700  405191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:51:28.360184  405191 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 10:51:28.360243  405191 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:51:28.369224  405191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:51:28.378656  405191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:51:28.387862  405191 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 10:51:28.396244  405191 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:51:28.405446  405191 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:51:28.414057  405191 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 10:51:28.423226  405191 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 10:51:28.430865  405191 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 10:51:28.438644  405191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:51:28.553737  405191 ssh_runner.go:195] Run: sudo systemctl restart crio
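The sequence of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) before crio is restarted. A minimal Go sketch of one such in-place rewrite, equivalent in effect to the cgroup_manager sed line; the path comes from the log, everything else is illustrative:

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        path := "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        // Same effect as: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
        re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
        out := re.ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
        if err := os.WriteFile(path, out, 0o644); err != nil {
            panic(err)
        }
    }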
	I1206 10:51:28.722710  405191 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 10:51:28.722782  405191 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 10:51:28.727796  405191 start.go:564] Will wait 60s for crictl version
	I1206 10:51:28.727854  405191 ssh_runner.go:195] Run: which crictl
	I1206 10:51:28.731603  405191 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 10:51:28.757634  405191 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 10:51:28.757708  405191 ssh_runner.go:195] Run: crio --version
	I1206 10:51:28.786864  405191 ssh_runner.go:195] Run: crio --version
	I1206 10:51:28.819624  405191 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1206 10:51:28.822438  405191 cli_runner.go:164] Run: docker network inspect functional-196950 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 10:51:28.838919  405191 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1206 10:51:28.845850  405191 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1206 10:51:28.848840  405191 kubeadm.go:884] updating cluster {Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 10:51:28.848980  405191 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 10:51:28.849059  405191 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:51:28.884770  405191 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:51:28.884782  405191 crio.go:433] Images already preloaded, skipping extraction
	I1206 10:51:28.884839  405191 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 10:51:28.911560  405191 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 10:51:28.911574  405191 cache_images.go:86] Images are preloaded, skipping loading
	I1206 10:51:28.911581  405191 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1206 10:51:28.911685  405191 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-196950 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
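In the kubelet unit printed above, the empty ExecStart= line is deliberate: a systemd drop-in must first clear the ExecStart inherited from the base unit before supplying its own. A short Go sketch of writing such a drop-in; the destination matches the scp target a few lines below, and the flag list is abbreviated relative to the log:

    package main

    import "os"

    func main() {
        // The empty ExecStart= clears the base unit's command; the second
        // ExecStart sets the override. Flags are a shortened illustration.
        dropin := "[Service]\n" +
            "ExecStart=\n" +
            "ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet" +
            " --config=/var/lib/kubelet/config.yaml" +
            " --hostname-override=functional-196950 --node-ip=192.168.49.2\n"
        err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf",
            []byte(dropin), 0o644)
        if err != nil {
            panic(err)
        }
    }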
	I1206 10:51:28.911771  405191 ssh_runner.go:195] Run: crio config
	I1206 10:51:28.966566  405191 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1206 10:51:28.966595  405191 cni.go:84] Creating CNI manager for ""
	I1206 10:51:28.966604  405191 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:51:28.966619  405191 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 10:51:28.966641  405191 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-196950 NodeName:functional-196950 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 10:51:28.966791  405191 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-196950"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
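The kubeadm config printed above is rendered from the options struct logged at kubeadm.go:190 and shipped to /var/tmp/minikube/kubeadm.yaml.new. A minimal Go sketch of that kind of rendering with text/template; the struct and template here are simplified stand-ins, not minikube's real ones:

    package main

    import (
        "os"
        "text/template"
    )

    // opts is a simplified stand-in for the kubeadm options in the log.
    type opts struct {
        AdvertiseAddress string
        APIServerPort    int
    }

    const tmpl = "apiVersion: kubeadm.k8s.io/v1beta4\n" +
        "kind: InitConfiguration\n" +
        "localAPIEndpoint:\n" +
        "  advertiseAddress: {{.AdvertiseAddress}}\n" +
        "  bindPort: {{.APIServerPort}}\n"

    func main() {
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        // Values taken from the config dump above.
        if err := t.Execute(os.Stdout, opts{"192.168.49.2", 8441}); err != nil {
            panic(err)
        }
    }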
	I1206 10:51:28.966870  405191 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 10:51:28.978798  405191 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 10:51:28.978872  405191 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 10:51:28.987304  405191 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1206 10:51:29.001847  405191 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1206 10:51:29.017577  405191 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1206 10:51:29.031751  405191 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1206 10:51:29.036513  405191 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 10:51:29.155805  405191 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 10:51:29.722153  405191 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950 for IP: 192.168.49.2
	I1206 10:51:29.722163  405191 certs.go:195] generating shared ca certs ...
	I1206 10:51:29.722178  405191 certs.go:227] acquiring lock for ca certs: {Name:mke2ec61a37b6f3abbcbeb9abd23d6a19d011dd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:51:29.722312  405191 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.key
	I1206 10:51:29.722350  405191 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.key
	I1206 10:51:29.722357  405191 certs.go:257] generating profile certs ...
	I1206 10:51:29.722458  405191 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.key
	I1206 10:51:29.722506  405191 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.key.a77b39a6
	I1206 10:51:29.722550  405191 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.key
	I1206 10:51:29.722659  405191 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855.pem (1338 bytes)
	W1206 10:51:29.722686  405191 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855_empty.pem, impossibly tiny 0 bytes
	I1206 10:51:29.722693  405191 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 10:51:29.722721  405191 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem (1082 bytes)
	I1206 10:51:29.722747  405191 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem (1123 bytes)
	I1206 10:51:29.722776  405191 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem (1679 bytes)
	I1206 10:51:29.722816  405191 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem (1708 bytes)
	I1206 10:51:29.723422  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 10:51:29.745118  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 10:51:29.764772  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 10:51:29.783979  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 10:51:29.803249  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1206 10:51:29.821820  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 10:51:29.840052  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 10:51:29.858172  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 10:51:29.876447  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 10:51:29.894619  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855.pem --> /usr/share/ca-certificates/364855.pem (1338 bytes)
	I1206 10:51:29.912710  405191 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem --> /usr/share/ca-certificates/3648552.pem (1708 bytes)
	I1206 10:51:29.930993  405191 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 10:51:29.944776  405191 ssh_runner.go:195] Run: openssl version
	I1206 10:51:29.951232  405191 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:51:29.958913  405191 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 10:51:29.966922  405191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:51:29.970672  405191 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:26 /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:51:29.970730  405191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 10:51:30.016305  405191 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 10:51:30.031889  405191 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364855.pem
	I1206 10:51:30.048455  405191 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364855.pem /etc/ssl/certs/364855.pem
	I1206 10:51:30.063564  405191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364855.pem
	I1206 10:51:30.076207  405191 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:36 /usr/share/ca-certificates/364855.pem
	I1206 10:51:30.076271  405191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364855.pem
	I1206 10:51:30.128156  405191 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 10:51:30.136853  405191 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648552.pem
	I1206 10:51:30.146061  405191 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648552.pem /etc/ssl/certs/3648552.pem
	I1206 10:51:30.154785  405191 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648552.pem
	I1206 10:51:30.159209  405191 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:36 /usr/share/ca-certificates/3648552.pem
	I1206 10:51:30.159296  405191 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648552.pem
	I1206 10:51:30.202450  405191 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 10:51:30.210421  405191 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 10:51:30.214689  405191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 10:51:30.257294  405191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 10:51:30.301161  405191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 10:51:30.342552  405191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 10:51:30.384443  405191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 10:51:30.426153  405191 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1206 10:51:30.467193  405191 kubeadm.go:401] StartCluster: {Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:51:30.467269  405191 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 10:51:30.467336  405191 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:51:30.505294  405191 cri.go:89] found id: ""
	I1206 10:51:30.505356  405191 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 10:51:30.514317  405191 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 10:51:30.514327  405191 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 10:51:30.514378  405191 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 10:51:30.522953  405191 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:51:30.523619  405191 kubeconfig.go:125] found "functional-196950" server: "https://192.168.49.2:8441"
	I1206 10:51:30.525284  405191 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 10:51:30.535655  405191 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-06 10:36:53.608460602 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-06 10:51:29.025529796 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
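The drift detection above boils down to diff's exit status: 0 means the rendered config matches what is on disk, 1 means it changed and the control plane is reconfigured from the new file. A small Go sketch of that check, using the paths from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("sudo", "diff", "-u",
            "/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        out, err := cmd.CombinedOutput()
        if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
            // Status 1: files differ, take the reconfigure path.
            fmt.Printf("config drift detected:\n%s", out)
            return
        }
        if err != nil {
            panic(err) // status >= 2: diff itself failed
        }
        fmt.Println("no drift")
    }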
	I1206 10:51:30.535667  405191 kubeadm.go:1161] stopping kube-system containers ...
	I1206 10:51:30.535679  405191 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1206 10:51:30.535750  405191 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 10:51:30.563289  405191 cri.go:89] found id: ""
	I1206 10:51:30.563367  405191 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1206 10:51:30.577669  405191 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:51:30.585599  405191 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Dec  6 10:40 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec  6 10:40 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec  6 10:40 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec  6 10:40 /etc/kubernetes/scheduler.conf
	
	I1206 10:51:30.585661  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:51:30.593607  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:51:30.601561  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:51:30.601615  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:51:30.609082  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:51:30.616706  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:51:30.616764  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:51:30.624576  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:51:30.632333  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 10:51:30.632396  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
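Each of the grep/rm pairs above applies the same rule: if a kubeconfig under /etc/kubernetes does not reference https://control-plane.minikube.internal:8441, it is deleted so that the following "kubeadm init phase kubeconfig" regenerates it. A compact Go sketch of that loop, illustrative rather than minikube's source:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8441"
        files := []string{
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // grep exits non-zero when the endpoint is absent, mirroring the
            // "may not be in ... - will remove" lines in the log.
            if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
                fmt.Printf("%s: endpoint not found, removing\n", f)
                _ = exec.Command("sudo", "rm", "-f", f).Run()
            }
        }
    }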
	I1206 10:51:30.640022  405191 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 10:51:30.648015  405191 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:51:30.694279  405191 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:51:31.789747  405191 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.095443049s)
	I1206 10:51:31.789807  405191 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:51:31.992373  405191 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:51:32.066243  405191 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1206 10:51:32.115098  405191 api_server.go:52] waiting for apiserver process to appear ...
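The repeated pgrep probes that follow poll for the kube-apiserver process on a roughly 500ms cadence until it appears or the wait gives up. A minimal Go sketch of such a loop; the 4-minute deadline is an assumption, not taken from the log:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(4 * time.Minute) // assumed timeout
        for time.Now().Before(deadline) {
            // Same probe as the log lines below.
            err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
            if err == nil {
                fmt.Println("apiserver process found")
                return
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
        }
        fmt.Println("timed out waiting for apiserver")
    }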
	I1206 10:51:32.115193  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:51:32.616025  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:51:33.115328  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:51:33.615777  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:51:34.116234  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:51:34.616203  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:51:35.115628  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:51:35.616081  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:51:36.116020  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:51:36.616269  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... 109 further identical pgrep health checks elided — the same command re-run at ~500 ms intervals from 10:51:37.115 through 10:52:31.115, none of them finding a kube-apiserver process ...]
	I1206 10:52:31.616201  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
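
The run above is a readiness poll: the same pgrep is re-issued on a fixed interval until a kube-apiserver process appears or a deadline expires. A minimal Go sketch of that pattern follows; the 500 ms interval matches the log cadence, while the timeout value, the local (non-ssh, non-sudo) execution, and the helper name waitForProcess are assumptions for illustration, not minikube's actual code.

// waitfor.go - sketch of a poll-until-deadline process check.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess is a hypothetical helper, not minikube's implementation.
// pgrep exits 0 when the pattern matches a running process and 1 otherwise,
// so Run() returning nil means the process was found.
func waitForProcess(pattern string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("pgrep", "-xnf", pattern).Run() == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("no process matching %q within %s", pattern, timeout)
}

func main() {
	fmt.Println(waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, time.Minute))
}
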
	I1206 10:52:32.116069  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:52:32.116145  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:52:32.141390  405191 cri.go:89] found id: ""
	I1206 10:52:32.141404  405191 logs.go:282] 0 containers: []
	W1206 10:52:32.141411  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:52:32.141416  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:52:32.141473  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:52:32.166484  405191 cri.go:89] found id: ""
	I1206 10:52:32.166497  405191 logs.go:282] 0 containers: []
	W1206 10:52:32.166504  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:52:32.166509  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:52:32.166565  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:52:32.194996  405191 cri.go:89] found id: ""
	I1206 10:52:32.195009  405191 logs.go:282] 0 containers: []
	W1206 10:52:32.195016  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:52:32.195021  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:52:32.195076  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:52:32.221300  405191 cri.go:89] found id: ""
	I1206 10:52:32.221313  405191 logs.go:282] 0 containers: []
	W1206 10:52:32.221321  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:52:32.221326  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:52:32.221382  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:52:32.247157  405191 cri.go:89] found id: ""
	I1206 10:52:32.247171  405191 logs.go:282] 0 containers: []
	W1206 10:52:32.247178  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:52:32.247201  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:52:32.247261  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:52:32.272996  405191 cri.go:89] found id: ""
	I1206 10:52:32.273011  405191 logs.go:282] 0 containers: []
	W1206 10:52:32.273018  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:52:32.273023  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:52:32.273087  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:52:32.298872  405191 cri.go:89] found id: ""
	I1206 10:52:32.298885  405191 logs.go:282] 0 containers: []
	W1206 10:52:32.298892  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:52:32.298899  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:52:32.298909  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:52:32.365036  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:52:32.365056  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:52:32.380152  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:52:32.380168  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:52:32.448480  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:52:32.439513   11005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:32.440191   11005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:32.441917   11005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:32.442441   11005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:32.444184   11005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:52:32.439513   11005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:32.440191   11005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:32.441917   11005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:32.442441   11005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:32.444184   11005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:52:32.448508  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:52:32.448519  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:52:32.521363  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:52:32.521385  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
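
Once the process poll gives up, the log falls back to asking the container runtime directly: for each control-plane component, crictl ps -a --quiet --name=<component> is run and the returned IDs are counted; found id: "" and 0 containers above mean every query came back empty. A standalone sketch of that sweep, assuming crictl is on PATH and sudo is available (the component list is taken from the log):

// crisweep.go - sketch of the per-component container sweep.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, name := range components {
		// --quiet prints one container ID per line; empty output means no
		// container (running or exited) matches the name filter.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%s: %d container(s) %v\n", name, len(ids), ids)
	}
}
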
	I1206 10:52:35.051557  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:52:35.061829  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:52:35.061887  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:52:35.090093  405191 cri.go:89] found id: ""
	I1206 10:52:35.090109  405191 logs.go:282] 0 containers: []
	W1206 10:52:35.090116  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:52:35.090123  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:52:35.090185  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:52:35.120692  405191 cri.go:89] found id: ""
	I1206 10:52:35.120706  405191 logs.go:282] 0 containers: []
	W1206 10:52:35.120713  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:52:35.120718  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:52:35.120781  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:52:35.150871  405191 cri.go:89] found id: ""
	I1206 10:52:35.150885  405191 logs.go:282] 0 containers: []
	W1206 10:52:35.150895  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:52:35.150901  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:52:35.150966  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:52:35.178176  405191 cri.go:89] found id: ""
	I1206 10:52:35.178189  405191 logs.go:282] 0 containers: []
	W1206 10:52:35.178196  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:52:35.178201  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:52:35.178259  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:52:35.203836  405191 cri.go:89] found id: ""
	I1206 10:52:35.203851  405191 logs.go:282] 0 containers: []
	W1206 10:52:35.203858  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:52:35.203864  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:52:35.203922  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:52:35.229838  405191 cri.go:89] found id: ""
	I1206 10:52:35.229852  405191 logs.go:282] 0 containers: []
	W1206 10:52:35.229860  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:52:35.229865  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:52:35.229923  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:52:35.255728  405191 cri.go:89] found id: ""
	I1206 10:52:35.255742  405191 logs.go:282] 0 containers: []
	W1206 10:52:35.255749  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:52:35.255763  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:52:35.255774  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:52:35.326293  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:52:35.326313  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:52:35.341587  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:52:35.341603  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:52:35.406128  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:52:35.396962   11109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:35.397407   11109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:35.399334   11109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:35.399842   11109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:35.401729   11109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:52:35.396962   11109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:35.397407   11109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:35.399334   11109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:35.399842   11109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:35.401729   11109 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:52:35.406138  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:52:35.406148  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:52:35.477539  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:52:35.477561  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
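
Every describe nodes failure above reduces to the same symptom: nothing is listening on localhost:8441 (the apiserver port for this profile), so kubectl gets connection refused before any API discovery can happen. A direct TCP probe distinguishes that hard refusal from a hang or a TLS problem; a small sketch, with the port taken from the log:

// portprobe.go - sketch of a direct reachability check on the apiserver port.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// A closed port fails immediately with "connection refused",
		// matching the kubectl errors in the log.
		fmt.Println("apiserver port unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8441")
}
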
	I1206 10:52:38.012461  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:52:38.026662  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:52:38.026746  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:52:38.057501  405191 cri.go:89] found id: ""
	I1206 10:52:38.057514  405191 logs.go:282] 0 containers: []
	W1206 10:52:38.057522  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:52:38.057527  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:52:38.057597  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:52:38.087721  405191 cri.go:89] found id: ""
	I1206 10:52:38.087736  405191 logs.go:282] 0 containers: []
	W1206 10:52:38.087744  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:52:38.087750  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:52:38.087812  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:52:38.115539  405191 cri.go:89] found id: ""
	I1206 10:52:38.115553  405191 logs.go:282] 0 containers: []
	W1206 10:52:38.115560  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:52:38.115566  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:52:38.115624  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:52:38.140812  405191 cri.go:89] found id: ""
	I1206 10:52:38.140826  405191 logs.go:282] 0 containers: []
	W1206 10:52:38.140833  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:52:38.140838  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:52:38.140896  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:52:38.166576  405191 cri.go:89] found id: ""
	I1206 10:52:38.166590  405191 logs.go:282] 0 containers: []
	W1206 10:52:38.166597  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:52:38.166602  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:52:38.166662  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:52:38.191851  405191 cri.go:89] found id: ""
	I1206 10:52:38.191864  405191 logs.go:282] 0 containers: []
	W1206 10:52:38.191871  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:52:38.191876  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:52:38.191933  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:52:38.217461  405191 cri.go:89] found id: ""
	I1206 10:52:38.217475  405191 logs.go:282] 0 containers: []
	W1206 10:52:38.217482  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:52:38.217490  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:52:38.217502  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:52:38.232449  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:52:38.232465  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:52:38.295220  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:52:38.286931   11213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:38.287615   11213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:38.289268   11213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:38.289707   11213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:38.291283   11213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:52:38.286931   11213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:38.287615   11213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:38.289268   11213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:38.289707   11213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:38.291283   11213 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:52:38.295242  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:52:38.295255  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:52:38.363789  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:52:38.363809  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:52:38.393298  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:52:38.393313  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
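
Note that the order of the log-gathering steps shifts between cycles: the first two cycles go kubelet, dmesg, describe nodes, CRI-O, container status, while the cycle above starts with dmesg and ends with kubelet. That is the signature of ranging over a Go map, whose iteration order is deliberately randomized; whether minikube actually keys its log sources in a map is an assumption here, but the behaviour itself is easy to demonstrate:

// maporder.go - Go map iteration order is unspecified and varies per pass.
package main

import "fmt"

func main() {
	sources := map[string]string{
		"kubelet":          "journalctl -u kubelet -n 400",
		"dmesg":            "dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"describe nodes":   "kubectl describe nodes",
		"CRI-O":            "journalctl -u crio -n 400",
		"container status": "crictl ps -a",
	}
	for pass := 0; pass < 3; pass++ {
		for name := range sources {
			fmt.Print(name, " | ")
		}
		fmt.Println() // the three passes typically print in different orders
	}
}
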
	[... four further identical diagnostic cycles elided (starting 10:52:40.963, 10:52:43.934, 10:52:46.906 and 10:52:49.890): each pgrep/crictl sweep again found no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager or kindnet containers; each "describe nodes" attempt (kubectl PIDs 11315, 11440, 11530, 11636) failed with the same connection-refused errors against localhost:8441; the order of the log-gathering steps continued to vary from cycle to cycle ...]
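
Taken together, each cycle is answering three questions about the control plane: is there a kube-apiserver process, does any kube-apiserver container exist (even exited), and is the API port accepting connections. In every cycle in this log the answer to all three is no. A compact, hypothetical way to express that triage, composing the checks sketched earlier (not minikube code):

// triage.go - sketch combining the three health signals from the log.
package main

import (
	"fmt"
	"net"
	"os/exec"
	"strings"
	"time"
)

func apiserverState() string {
	if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
		return "process running"
	}
	out, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
	if len(strings.Fields(string(out))) > 0 {
		return "container exists but no live process"
	}
	if c, err := net.DialTimeout("tcp", "localhost:8441", time.Second); err == nil {
		c.Close()
		return "port open but apiserver process/container not identified"
	}
	// The state recorded in every cycle of this log:
	return "down: no process, no container, port 8441 refused"
}

func main() { fmt.Println(apiserverState()) }
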
	I1206 10:52:52.872285  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:52:52.882349  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:52:52.882406  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:52:52.911618  405191 cri.go:89] found id: ""
	I1206 10:52:52.911631  405191 logs.go:282] 0 containers: []
	W1206 10:52:52.911638  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:52:52.911644  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:52:52.911705  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:52:52.937062  405191 cri.go:89] found id: ""
	I1206 10:52:52.937077  405191 logs.go:282] 0 containers: []
	W1206 10:52:52.937084  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:52:52.937089  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:52:52.937149  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:52:52.963326  405191 cri.go:89] found id: ""
	I1206 10:52:52.963340  405191 logs.go:282] 0 containers: []
	W1206 10:52:52.963347  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:52:52.963352  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:52:52.963437  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:52:52.997061  405191 cri.go:89] found id: ""
	I1206 10:52:52.997074  405191 logs.go:282] 0 containers: []
	W1206 10:52:52.997081  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:52:52.997086  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:52:52.997149  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:52:53.035456  405191 cri.go:89] found id: ""
	I1206 10:52:53.035469  405191 logs.go:282] 0 containers: []
	W1206 10:52:53.035477  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:52:53.035483  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:52:53.035543  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:52:53.063687  405191 cri.go:89] found id: ""
	I1206 10:52:53.063700  405191 logs.go:282] 0 containers: []
	W1206 10:52:53.063707  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:52:53.063712  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:52:53.063770  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:52:53.089131  405191 cri.go:89] found id: ""
	I1206 10:52:53.089145  405191 logs.go:282] 0 containers: []
	W1206 10:52:53.089152  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:52:53.089161  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:52:53.089180  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:52:53.154130  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:52:53.145768   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:53.146202   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:53.147939   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:53.148440   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:53.150128   11735 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
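The repeated "connection refused" is the underlying failure, not a side effect of log gathering: the pinned kubectl binary (/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl) reads /var/lib/minikube/kubeconfig, whose server is https://localhost:8441, and nothing is listening on that port because no kube-apiserver container was ever created (every crictl query above returns an empty ID). Two quick checks one could run on the node to confirm this reading (standard tooling; the port is taken from the log):

    # Is anything bound to the apiserver port?
    ss -ltn 'sport = :8441'
    # Does the runtime know of any apiserver container, even an exited one?
    sudo crictl ps -a | grep -i kube-apiserver || echo "no apiserver container"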
	I1206 10:52:53.154142  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:52:53.154153  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:52:53.226211  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:52:53.226231  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:52:53.255876  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:52:53.255893  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:52:53.328864  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:52:53.328884  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
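Each pass collects the same five sources: the kubelet and CRI-O unit logs via journalctl, kernel messages via dmesg (warnings and above), kubectl describe nodes, and the container list via crictl with a docker fallback. The same bundle can be captured by hand over minikube ssh; a sketch, with <profile> standing in for the cluster name (not shown in this excerpt):

    minikube -p <profile> ssh "sudo journalctl -u kubelet -n 400"
    minikube -p <profile> ssh "sudo journalctl -u crio -n 400"
    minikube -p <profile> ssh "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"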
	I1206 10:52:55.844855  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:52:55.855173  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:52:55.855232  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:52:55.882003  405191 cri.go:89] found id: ""
	I1206 10:52:55.882016  405191 logs.go:282] 0 containers: []
	W1206 10:52:55.882037  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:52:55.882043  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:52:55.882102  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:52:55.906679  405191 cri.go:89] found id: ""
	I1206 10:52:55.906693  405191 logs.go:282] 0 containers: []
	W1206 10:52:55.906700  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:52:55.906705  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:52:55.906763  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:52:55.932742  405191 cri.go:89] found id: ""
	I1206 10:52:55.932756  405191 logs.go:282] 0 containers: []
	W1206 10:52:55.932763  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:52:55.932769  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:52:55.932830  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:52:55.959084  405191 cri.go:89] found id: ""
	I1206 10:52:55.959097  405191 logs.go:282] 0 containers: []
	W1206 10:52:55.959104  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:52:55.959109  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:52:55.959167  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:52:56.001438  405191 cri.go:89] found id: ""
	I1206 10:52:56.001453  405191 logs.go:282] 0 containers: []
	W1206 10:52:56.001461  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:52:56.001467  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:52:56.001540  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:52:56.039276  405191 cri.go:89] found id: ""
	I1206 10:52:56.039291  405191 logs.go:282] 0 containers: []
	W1206 10:52:56.039298  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:52:56.039304  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:52:56.039368  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:52:56.074083  405191 cri.go:89] found id: ""
	I1206 10:52:56.074097  405191 logs.go:282] 0 containers: []
	W1206 10:52:56.074104  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:52:56.074112  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:52:56.074124  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:52:56.148294  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:52:56.148320  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:52:56.163720  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:52:56.163740  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:52:56.231608  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:52:56.222271   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:56.222910   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:56.224621   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:56.225337   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:56.227055   11846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:52:56.231633  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:52:56.231644  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:52:56.301348  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:52:56.301373  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
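The "container status" command above uses a small shell fallback chain: the backticks resolve crictl's full path (defaulting to the bare name if which finds nothing), and if that invocation fails entirely, docker ps -a is tried instead. The same idiom in modern shell (command -v being the portable replacement for which):

    sudo "$(command -v crictl || echo crictl)" ps -a || sudo docker ps -a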
	I1206 10:52:58.834132  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:52:58.844214  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:52:58.844271  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:52:58.871604  405191 cri.go:89] found id: ""
	I1206 10:52:58.871618  405191 logs.go:282] 0 containers: []
	W1206 10:52:58.871625  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:52:58.871630  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:52:58.871689  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:52:58.898243  405191 cri.go:89] found id: ""
	I1206 10:52:58.898257  405191 logs.go:282] 0 containers: []
	W1206 10:52:58.898264  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:52:58.898269  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:52:58.898325  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:52:58.921887  405191 cri.go:89] found id: ""
	I1206 10:52:58.921901  405191 logs.go:282] 0 containers: []
	W1206 10:52:58.921907  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:52:58.921913  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:52:58.921970  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:52:58.947546  405191 cri.go:89] found id: ""
	I1206 10:52:58.947563  405191 logs.go:282] 0 containers: []
	W1206 10:52:58.947570  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:52:58.947575  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:52:58.947645  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:52:58.976915  405191 cri.go:89] found id: ""
	I1206 10:52:58.976930  405191 logs.go:282] 0 containers: []
	W1206 10:52:58.976937  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:52:58.976942  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:52:58.977005  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:52:59.013936  405191 cri.go:89] found id: ""
	I1206 10:52:59.013949  405191 logs.go:282] 0 containers: []
	W1206 10:52:59.013956  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:52:59.013962  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:52:59.014020  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:52:59.044670  405191 cri.go:89] found id: ""
	I1206 10:52:59.044683  405191 logs.go:282] 0 containers: []
	W1206 10:52:59.044690  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:52:59.044698  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:52:59.044708  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:52:59.111552  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:52:59.111571  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:52:59.125917  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:52:59.125933  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:52:59.190341  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:52:59.182165   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:59.182776   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:59.184355   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:59.184805   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:52:59.186371   11953 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:52:59.190351  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:52:59.190362  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:52:59.258936  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:52:59.258957  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:01.790777  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:01.802470  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:01.802534  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:01.828331  405191 cri.go:89] found id: ""
	I1206 10:53:01.828345  405191 logs.go:282] 0 containers: []
	W1206 10:53:01.828352  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:01.828357  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:01.828415  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:01.853132  405191 cri.go:89] found id: ""
	I1206 10:53:01.853145  405191 logs.go:282] 0 containers: []
	W1206 10:53:01.853153  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:01.853158  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:01.853218  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:01.879034  405191 cri.go:89] found id: ""
	I1206 10:53:01.879048  405191 logs.go:282] 0 containers: []
	W1206 10:53:01.879055  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:01.879060  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:01.879119  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:01.905079  405191 cri.go:89] found id: ""
	I1206 10:53:01.905094  405191 logs.go:282] 0 containers: []
	W1206 10:53:01.905101  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:01.905106  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:01.905168  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:01.931029  405191 cri.go:89] found id: ""
	I1206 10:53:01.931043  405191 logs.go:282] 0 containers: []
	W1206 10:53:01.931050  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:01.931055  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:01.931115  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:01.958324  405191 cri.go:89] found id: ""
	I1206 10:53:01.958338  405191 logs.go:282] 0 containers: []
	W1206 10:53:01.958345  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:01.958351  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:01.958406  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:01.999570  405191 cri.go:89] found id: ""
	I1206 10:53:01.999583  405191 logs.go:282] 0 containers: []
	W1206 10:53:01.999590  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:01.999598  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:01.999613  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:02.075754  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:02.075775  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:02.091145  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:02.091168  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:02.166018  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:02.151211   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:02.151882   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:02.153563   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:02.154149   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:02.161209   12060 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:53:02.166029  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:02.166041  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:02.236832  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:02.236853  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:04.769770  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:04.780155  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:04.780230  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:04.805785  405191 cri.go:89] found id: ""
	I1206 10:53:04.805799  405191 logs.go:282] 0 containers: []
	W1206 10:53:04.805806  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:04.805811  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:04.805871  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:04.833423  405191 cri.go:89] found id: ""
	I1206 10:53:04.833445  405191 logs.go:282] 0 containers: []
	W1206 10:53:04.833452  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:04.833458  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:04.833523  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:04.859864  405191 cri.go:89] found id: ""
	I1206 10:53:04.859879  405191 logs.go:282] 0 containers: []
	W1206 10:53:04.859888  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:04.859895  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:04.859964  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:04.886417  405191 cri.go:89] found id: ""
	I1206 10:53:04.886431  405191 logs.go:282] 0 containers: []
	W1206 10:53:04.886437  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:04.886443  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:04.886503  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:04.912019  405191 cri.go:89] found id: ""
	I1206 10:53:04.912033  405191 logs.go:282] 0 containers: []
	W1206 10:53:04.912040  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:04.912044  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:04.912104  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:04.941901  405191 cri.go:89] found id: ""
	I1206 10:53:04.941915  405191 logs.go:282] 0 containers: []
	W1206 10:53:04.941922  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:04.941928  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:04.941990  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:04.967316  405191 cri.go:89] found id: ""
	I1206 10:53:04.967330  405191 logs.go:282] 0 containers: []
	W1206 10:53:04.967337  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:04.967344  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:04.967356  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:05.048268  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:05.048290  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:05.064282  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:05.064299  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:05.132111  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:05.123756   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:05.124563   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:05.126211   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:05.126545   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:05.128101   12166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:53:05.132131  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:05.132142  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:05.202438  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:05.202460  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:07.731737  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:07.742255  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:07.742344  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:07.767645  405191 cri.go:89] found id: ""
	I1206 10:53:07.767659  405191 logs.go:282] 0 containers: []
	W1206 10:53:07.767666  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:07.767671  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:07.767730  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:07.793951  405191 cri.go:89] found id: ""
	I1206 10:53:07.793975  405191 logs.go:282] 0 containers: []
	W1206 10:53:07.793983  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:07.793989  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:07.794055  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:07.819683  405191 cri.go:89] found id: ""
	I1206 10:53:07.819699  405191 logs.go:282] 0 containers: []
	W1206 10:53:07.819705  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:07.819711  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:07.819784  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:07.851523  405191 cri.go:89] found id: ""
	I1206 10:53:07.851537  405191 logs.go:282] 0 containers: []
	W1206 10:53:07.851543  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:07.851549  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:07.851627  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:07.878807  405191 cri.go:89] found id: ""
	I1206 10:53:07.878831  405191 logs.go:282] 0 containers: []
	W1206 10:53:07.878838  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:07.878844  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:07.878915  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:07.911047  405191 cri.go:89] found id: ""
	I1206 10:53:07.911060  405191 logs.go:282] 0 containers: []
	W1206 10:53:07.911078  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:07.911084  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:07.911155  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:07.937042  405191 cri.go:89] found id: ""
	I1206 10:53:07.937064  405191 logs.go:282] 0 containers: []
	W1206 10:53:07.937072  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:07.937080  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:07.937091  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:08.004528  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:08.004551  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:08.026930  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:08.026947  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:08.109064  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:08.100555   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:08.101020   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:08.102569   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:08.102918   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:08.104386   12274 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:53:08.109086  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:08.109096  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:08.177486  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:08.177508  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:10.706543  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:10.717198  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:10.717262  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:10.743532  405191 cri.go:89] found id: ""
	I1206 10:53:10.743545  405191 logs.go:282] 0 containers: []
	W1206 10:53:10.743552  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:10.743557  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:10.743617  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:10.768882  405191 cri.go:89] found id: ""
	I1206 10:53:10.768897  405191 logs.go:282] 0 containers: []
	W1206 10:53:10.768903  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:10.768908  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:10.768966  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:10.798729  405191 cri.go:89] found id: ""
	I1206 10:53:10.798742  405191 logs.go:282] 0 containers: []
	W1206 10:53:10.798751  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:10.798756  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:10.798814  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:10.823956  405191 cri.go:89] found id: ""
	I1206 10:53:10.823971  405191 logs.go:282] 0 containers: []
	W1206 10:53:10.823978  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:10.823984  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:10.824054  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:10.849242  405191 cri.go:89] found id: ""
	I1206 10:53:10.849271  405191 logs.go:282] 0 containers: []
	W1206 10:53:10.849278  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:10.849283  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:10.849351  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:10.876058  405191 cri.go:89] found id: ""
	I1206 10:53:10.876071  405191 logs.go:282] 0 containers: []
	W1206 10:53:10.876078  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:10.876086  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:10.876145  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:10.901170  405191 cri.go:89] found id: ""
	I1206 10:53:10.901184  405191 logs.go:282] 0 containers: []
	W1206 10:53:10.901192  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:10.901199  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:10.901210  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:10.971362  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:10.971388  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:11.005981  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:11.006000  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:11.089894  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:11.089916  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:11.106328  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:11.106365  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:11.174633  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:11.166045   12392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:11.167001   12392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:11.168645   12392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:11.169014   12392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:11.170539   12392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
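The timestamps put one full pass roughly every three seconds, which matches a plain poll-until-deadline loop around the pgrep check. A sketch of the equivalent wait (an approximation; minikube's real retry logic lives in its Go code, not in shell, and the five-minute deadline here is illustrative):

    deadline=$((SECONDS + 300))   # give up after five minutes (illustrative)
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      [ "$SECONDS" -ge "$deadline" ] && { echo "timed out waiting for apiserver" >&2; exit 1; }
      sleep 3
    done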
	I1206 10:53:13.674898  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:13.689619  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:13.689793  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:13.724853  405191 cri.go:89] found id: ""
	I1206 10:53:13.724867  405191 logs.go:282] 0 containers: []
	W1206 10:53:13.724874  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:13.724880  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:13.724939  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:13.751349  405191 cri.go:89] found id: ""
	I1206 10:53:13.751363  405191 logs.go:282] 0 containers: []
	W1206 10:53:13.751369  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:13.751402  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:13.751488  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:13.778380  405191 cri.go:89] found id: ""
	I1206 10:53:13.778395  405191 logs.go:282] 0 containers: []
	W1206 10:53:13.778402  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:13.778408  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:13.778474  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:13.806068  405191 cri.go:89] found id: ""
	I1206 10:53:13.806081  405191 logs.go:282] 0 containers: []
	W1206 10:53:13.806088  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:13.806093  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:13.806150  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:13.831347  405191 cri.go:89] found id: ""
	I1206 10:53:13.831360  405191 logs.go:282] 0 containers: []
	W1206 10:53:13.831367  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:13.831410  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:13.831494  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:13.856962  405191 cri.go:89] found id: ""
	I1206 10:53:13.856976  405191 logs.go:282] 0 containers: []
	W1206 10:53:13.856983  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:13.856994  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:13.857057  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:13.883227  405191 cri.go:89] found id: ""
	I1206 10:53:13.883241  405191 logs.go:282] 0 containers: []
	W1206 10:53:13.883248  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:13.883256  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:13.883268  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:13.912731  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:13.912749  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:13.981562  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:13.981581  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:13.997805  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:13.997822  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:14.076333  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:14.066553   12495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:14.067750   12495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:14.068525   12495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:14.070431   12495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:14.071166   12495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:53:14.076343  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:14.076355  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:16.646007  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:16.656726  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:16.656822  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:16.682515  405191 cri.go:89] found id: ""
	I1206 10:53:16.682529  405191 logs.go:282] 0 containers: []
	W1206 10:53:16.682535  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:16.682541  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:16.682609  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:16.708327  405191 cri.go:89] found id: ""
	I1206 10:53:16.708341  405191 logs.go:282] 0 containers: []
	W1206 10:53:16.708359  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:16.708365  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:16.708433  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:16.744002  405191 cri.go:89] found id: ""
	I1206 10:53:16.744023  405191 logs.go:282] 0 containers: []
	W1206 10:53:16.744032  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:16.744037  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:16.744099  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:16.771487  405191 cri.go:89] found id: ""
	I1206 10:53:16.771501  405191 logs.go:282] 0 containers: []
	W1206 10:53:16.771509  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:16.771514  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:16.771594  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:16.799494  405191 cri.go:89] found id: ""
	I1206 10:53:16.799507  405191 logs.go:282] 0 containers: []
	W1206 10:53:16.799514  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:16.799520  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:16.799595  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:16.825114  405191 cri.go:89] found id: ""
	I1206 10:53:16.825128  405191 logs.go:282] 0 containers: []
	W1206 10:53:16.825135  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:16.825141  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:16.825204  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:16.851277  405191 cri.go:89] found id: ""
	I1206 10:53:16.851304  405191 logs.go:282] 0 containers: []
	W1206 10:53:16.851312  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:16.851319  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:16.851329  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:16.880918  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:16.880935  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:16.946617  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:16.946636  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:16.961739  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:16.961756  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:17.047880  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:17.038809   12598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:17.039588   12598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:17.041249   12598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:17.041748   12598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:17.043299   12598 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:53:17.047890  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:17.047901  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
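	The cycle above (a pgrep for the kube-apiserver process, one crictl probe per control-plane component, then kubelet/dmesg/describe-nodes/CRI-O log gathering) repeats below roughly every three seconds while minikube waits for the apiserver to come back. As an illustration only, here is a minimal Go sketch of the per-component crictl probe; it is reconstructed from the commands visible in the log and is not minikube's actual code:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // components mirrors the names probed in the log above.
    var components = []string{
    	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    	"kube-proxy", "kube-controller-manager", "kindnet",
    }

    // containerIDs runs the same command the log shows:
    //   sudo crictl ps -a --quiet --name=<name>
    // and returns the container IDs it prints, one per line.
    func containerIDs(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range components {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Printf("probe %s: %v\n", c, err)
    			continue
    		}
    		if len(ids) == 0 {
    			// Matches the log's: No container was found matching "<name>"
    			fmt.Printf("no container found matching %q\n", c)
    			continue
    		}
    		fmt.Printf("%s: %s\n", c, strings.Join(ids, ", "))
    	}
    }

	With no control plane running, every probe returns an empty ID list, which the log records as found id: "" and 0 containers: [].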
	I1206 10:53:19.616855  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:19.627228  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:19.627288  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:19.654067  405191 cri.go:89] found id: ""
	I1206 10:53:19.654081  405191 logs.go:282] 0 containers: []
	W1206 10:53:19.654088  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:19.654093  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:19.654166  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:19.679488  405191 cri.go:89] found id: ""
	I1206 10:53:19.679502  405191 logs.go:282] 0 containers: []
	W1206 10:53:19.679509  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:19.679515  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:19.679573  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:19.706620  405191 cri.go:89] found id: ""
	I1206 10:53:19.706635  405191 logs.go:282] 0 containers: []
	W1206 10:53:19.706642  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:19.706647  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:19.706706  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:19.734381  405191 cri.go:89] found id: ""
	I1206 10:53:19.734395  405191 logs.go:282] 0 containers: []
	W1206 10:53:19.734406  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:19.734412  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:19.734476  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:19.761415  405191 cri.go:89] found id: ""
	I1206 10:53:19.761429  405191 logs.go:282] 0 containers: []
	W1206 10:53:19.761436  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:19.761441  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:19.761502  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:19.787176  405191 cri.go:89] found id: ""
	I1206 10:53:19.787190  405191 logs.go:282] 0 containers: []
	W1206 10:53:19.787203  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:19.787209  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:19.787270  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:19.813067  405191 cri.go:89] found id: ""
	I1206 10:53:19.813081  405191 logs.go:282] 0 containers: []
	W1206 10:53:19.813088  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:19.813096  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:19.813105  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:19.878821  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:19.878840  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:19.894664  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:19.894680  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:19.965061  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:19.956101   12690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:19.957218   12690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:19.958973   12690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:19.959413   12690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:19.960938   12690 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:53:19.965101  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:19.965111  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:20.038434  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:20.038456  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:22.572942  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:22.583202  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:22.583273  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:22.608534  405191 cri.go:89] found id: ""
	I1206 10:53:22.608548  405191 logs.go:282] 0 containers: []
	W1206 10:53:22.608556  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:22.608561  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:22.608623  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:22.637655  405191 cri.go:89] found id: ""
	I1206 10:53:22.637673  405191 logs.go:282] 0 containers: []
	W1206 10:53:22.637680  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:22.637685  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:22.637748  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:22.666908  405191 cri.go:89] found id: ""
	I1206 10:53:22.666922  405191 logs.go:282] 0 containers: []
	W1206 10:53:22.666929  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:22.666935  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:22.666995  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:22.694611  405191 cri.go:89] found id: ""
	I1206 10:53:22.694625  405191 logs.go:282] 0 containers: []
	W1206 10:53:22.694633  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:22.694638  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:22.694705  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:22.720468  405191 cri.go:89] found id: ""
	I1206 10:53:22.720482  405191 logs.go:282] 0 containers: []
	W1206 10:53:22.720489  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:22.720494  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:22.720551  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:22.750061  405191 cri.go:89] found id: ""
	I1206 10:53:22.750075  405191 logs.go:282] 0 containers: []
	W1206 10:53:22.750082  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:22.750087  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:22.750148  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:22.778201  405191 cri.go:89] found id: ""
	I1206 10:53:22.778216  405191 logs.go:282] 0 containers: []
	W1206 10:53:22.778223  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:22.778230  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:22.778241  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:22.848689  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:22.848710  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:22.878893  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:22.878908  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:22.945043  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:22.945065  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:22.960966  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:22.960982  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:23.041735  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:23.033031   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:23.033838   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:23.035561   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:23.036147   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:23.037681   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:53:25.543429  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:25.553845  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:25.553906  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:25.580411  405191 cri.go:89] found id: ""
	I1206 10:53:25.580427  405191 logs.go:282] 0 containers: []
	W1206 10:53:25.580434  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:25.580439  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:25.580498  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:25.610347  405191 cri.go:89] found id: ""
	I1206 10:53:25.610361  405191 logs.go:282] 0 containers: []
	W1206 10:53:25.610368  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:25.610373  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:25.610430  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:25.637376  405191 cri.go:89] found id: ""
	I1206 10:53:25.637390  405191 logs.go:282] 0 containers: []
	W1206 10:53:25.637398  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:25.637403  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:25.637463  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:25.666544  405191 cri.go:89] found id: ""
	I1206 10:53:25.666558  405191 logs.go:282] 0 containers: []
	W1206 10:53:25.666572  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:25.666577  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:25.666636  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:25.692777  405191 cri.go:89] found id: ""
	I1206 10:53:25.692791  405191 logs.go:282] 0 containers: []
	W1206 10:53:25.692798  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:25.692803  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:25.692865  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:25.721819  405191 cri.go:89] found id: ""
	I1206 10:53:25.721833  405191 logs.go:282] 0 containers: []
	W1206 10:53:25.721841  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:25.721845  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:25.721901  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:25.749420  405191 cri.go:89] found id: ""
	I1206 10:53:25.749435  405191 logs.go:282] 0 containers: []
	W1206 10:53:25.749442  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:25.749450  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:25.749461  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:25.817956  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:25.817979  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:25.847454  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:25.847480  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:25.913445  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:25.913464  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:25.928310  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:25.928326  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:26.010257  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:25.998143   12912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:25.999802   12912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:26.001851   12912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:26.002260   12912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:26.005429   12912 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:53:28.510540  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:28.521536  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:28.521597  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:28.549848  405191 cri.go:89] found id: ""
	I1206 10:53:28.549862  405191 logs.go:282] 0 containers: []
	W1206 10:53:28.549869  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:28.549880  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:28.549941  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:28.574916  405191 cri.go:89] found id: ""
	I1206 10:53:28.574929  405191 logs.go:282] 0 containers: []
	W1206 10:53:28.574937  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:28.574941  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:28.575001  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:28.603948  405191 cri.go:89] found id: ""
	I1206 10:53:28.603963  405191 logs.go:282] 0 containers: []
	W1206 10:53:28.603971  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:28.603976  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:28.604038  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:28.633100  405191 cri.go:89] found id: ""
	I1206 10:53:28.633114  405191 logs.go:282] 0 containers: []
	W1206 10:53:28.633121  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:28.633127  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:28.633186  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:28.658360  405191 cri.go:89] found id: ""
	I1206 10:53:28.658374  405191 logs.go:282] 0 containers: []
	W1206 10:53:28.658381  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:28.658386  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:28.658450  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:28.686919  405191 cri.go:89] found id: ""
	I1206 10:53:28.686933  405191 logs.go:282] 0 containers: []
	W1206 10:53:28.686949  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:28.686955  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:28.687012  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:28.713970  405191 cri.go:89] found id: ""
	I1206 10:53:28.713984  405191 logs.go:282] 0 containers: []
	W1206 10:53:28.713991  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:28.714001  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:28.714011  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:28.783354  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:28.783415  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:28.799765  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:28.799785  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:28.875190  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:28.865163   13003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:28.865941   13003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:28.867993   13003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:28.868552   13003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:28.870594   13003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:53:28.875200  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:28.875211  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:28.947238  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:28.947258  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:31.487136  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:31.497608  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:31.497670  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:31.524319  405191 cri.go:89] found id: ""
	I1206 10:53:31.524333  405191 logs.go:282] 0 containers: []
	W1206 10:53:31.524341  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:31.524347  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:31.524409  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:31.550830  405191 cri.go:89] found id: ""
	I1206 10:53:31.550845  405191 logs.go:282] 0 containers: []
	W1206 10:53:31.550852  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:31.550857  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:31.550925  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:31.577502  405191 cri.go:89] found id: ""
	I1206 10:53:31.577516  405191 logs.go:282] 0 containers: []
	W1206 10:53:31.577523  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:31.577528  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:31.577587  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:31.604074  405191 cri.go:89] found id: ""
	I1206 10:53:31.604088  405191 logs.go:282] 0 containers: []
	W1206 10:53:31.604095  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:31.604100  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:31.604157  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:31.630962  405191 cri.go:89] found id: ""
	I1206 10:53:31.630976  405191 logs.go:282] 0 containers: []
	W1206 10:53:31.630984  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:31.630989  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:31.631053  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:31.656604  405191 cri.go:89] found id: ""
	I1206 10:53:31.656619  405191 logs.go:282] 0 containers: []
	W1206 10:53:31.656626  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:31.656632  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:31.656695  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:31.682731  405191 cri.go:89] found id: ""
	I1206 10:53:31.682745  405191 logs.go:282] 0 containers: []
	W1206 10:53:31.682752  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:31.682760  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:31.682771  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:31.715043  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:31.715059  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:31.780742  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:31.780762  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:31.795393  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:31.795410  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:31.863799  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:31.855344   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:31.856015   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:31.857739   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:31.858189   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:31.859847   13121 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:53:31.863809  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:31.863820  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:34.432706  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:34.442775  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:34.442837  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:34.468444  405191 cri.go:89] found id: ""
	I1206 10:53:34.468458  405191 logs.go:282] 0 containers: []
	W1206 10:53:34.468465  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:34.468471  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:34.468536  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:34.494328  405191 cri.go:89] found id: ""
	I1206 10:53:34.494343  405191 logs.go:282] 0 containers: []
	W1206 10:53:34.494350  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:34.494356  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:34.494418  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:34.527045  405191 cri.go:89] found id: ""
	I1206 10:53:34.527060  405191 logs.go:282] 0 containers: []
	W1206 10:53:34.527068  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:34.527076  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:34.527139  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:34.554315  405191 cri.go:89] found id: ""
	I1206 10:53:34.554328  405191 logs.go:282] 0 containers: []
	W1206 10:53:34.554335  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:34.554340  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:34.554408  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:34.579994  405191 cri.go:89] found id: ""
	I1206 10:53:34.580009  405191 logs.go:282] 0 containers: []
	W1206 10:53:34.580024  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:34.580030  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:34.580093  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:34.608896  405191 cri.go:89] found id: ""
	I1206 10:53:34.608910  405191 logs.go:282] 0 containers: []
	W1206 10:53:34.608917  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:34.608925  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:34.608983  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:34.638506  405191 cri.go:89] found id: ""
	I1206 10:53:34.638521  405191 logs.go:282] 0 containers: []
	W1206 10:53:34.638528  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:34.638536  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:34.638549  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:34.700281  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:34.691941   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:34.692724   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:34.693733   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:34.694312   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:34.696014   13210 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:53:34.700290  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:34.700302  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:34.773019  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:34.773040  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:34.803610  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:34.803628  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:34.870473  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:34.870498  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
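	The timestamps show this probe firing at a steady cadence (10:53:16, :19, :22, :25, :28, :31, :34, :37, :40), about one attempt every three seconds. A hedged Go sketch of such a poll-until-deadline loop follows; the pgrep pattern and the three-second interval come from the log, while the deadline handling is illustrative:

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    	"time"
    )

    // apiserverRunning mirrors the liveness check in the log:
    //   sudo pgrep -xnf kube-apiserver.*minikube.*
    // pgrep exits non-zero when no process matches.
    func apiserverRunning() bool {
    	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    // waitForAPIServer polls at the given interval until the process
    // appears or the deadline passes.
    func waitForAPIServer(interval, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if apiserverRunning() {
    			return nil
    		}
    		time.Sleep(interval)
    	}
    	return errors.New("kube-apiserver never appeared before the deadline")
    }

    func main() {
    	if err := waitForAPIServer(3*time.Second, time.Minute); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("kube-apiserver is up")
    }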
	I1206 10:53:37.386935  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:37.397529  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:37.397618  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:37.422529  405191 cri.go:89] found id: ""
	I1206 10:53:37.422543  405191 logs.go:282] 0 containers: []
	W1206 10:53:37.422550  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:37.422556  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:37.422613  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:37.447810  405191 cri.go:89] found id: ""
	I1206 10:53:37.447824  405191 logs.go:282] 0 containers: []
	W1206 10:53:37.447830  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:37.447836  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:37.447895  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:37.473775  405191 cri.go:89] found id: ""
	I1206 10:53:37.473794  405191 logs.go:282] 0 containers: []
	W1206 10:53:37.473801  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:37.473806  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:37.473862  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:37.499349  405191 cri.go:89] found id: ""
	I1206 10:53:37.499362  405191 logs.go:282] 0 containers: []
	W1206 10:53:37.499370  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:37.499400  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:37.499468  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:37.526194  405191 cri.go:89] found id: ""
	I1206 10:53:37.526208  405191 logs.go:282] 0 containers: []
	W1206 10:53:37.526216  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:37.526221  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:37.526286  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:37.552021  405191 cri.go:89] found id: ""
	I1206 10:53:37.552041  405191 logs.go:282] 0 containers: []
	W1206 10:53:37.552049  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:37.552054  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:37.552113  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:37.577455  405191 cri.go:89] found id: ""
	I1206 10:53:37.577469  405191 logs.go:282] 0 containers: []
	W1206 10:53:37.577476  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:37.577484  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:37.577495  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:37.605307  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:37.605324  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:37.674813  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:37.674836  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:37.689252  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:37.689268  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:37.751707  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:37.743090   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:37.743746   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:37.745542   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:37.746167   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:37.747937   13331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1206 10:53:37.751719  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:37.751730  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:40.320654  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:40.331310  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:40.331372  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:40.357691  405191 cri.go:89] found id: ""
	I1206 10:53:40.357706  405191 logs.go:282] 0 containers: []
	W1206 10:53:40.357721  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:40.357726  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:40.357789  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:40.383818  405191 cri.go:89] found id: ""
	I1206 10:53:40.383833  405191 logs.go:282] 0 containers: []
	W1206 10:53:40.383841  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:40.383847  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:40.383904  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:40.412121  405191 cri.go:89] found id: ""
	I1206 10:53:40.412134  405191 logs.go:282] 0 containers: []
	W1206 10:53:40.412141  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:40.412146  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:40.412204  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:40.438527  405191 cri.go:89] found id: ""
	I1206 10:53:40.438542  405191 logs.go:282] 0 containers: []
	W1206 10:53:40.438549  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:40.438554  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:40.438616  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:40.465329  405191 cri.go:89] found id: ""
	I1206 10:53:40.465344  405191 logs.go:282] 0 containers: []
	W1206 10:53:40.465351  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:40.465356  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:40.465420  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:40.491939  405191 cri.go:89] found id: ""
	I1206 10:53:40.491952  405191 logs.go:282] 0 containers: []
	W1206 10:53:40.491960  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:40.491965  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:40.492029  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:40.516801  405191 cri.go:89] found id: ""
	I1206 10:53:40.516821  405191 logs.go:282] 0 containers: []
	W1206 10:53:40.516828  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:40.516836  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:40.516848  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:40.593042  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:40.593062  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:40.608966  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:40.608986  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:40.675818  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:40.665869   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:40.667834   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:40.668210   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:40.669803   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:40.670394   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:40.665869   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:40.667834   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:40.668210   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:40.669803   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:40.670394   13425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
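
The connection-refused errors above mean nothing is listening on the apiserver port (8441) inside the node, so every kubectl probe fails before reaching the API. A quick manual check from the node, as a sketch (assuming curl and ss are present in the minikube image):

    # is anything bound to the apiserver port?
    sudo ss -tlnp | grep 8441 || echo "no listener on 8441"
    # probe the endpoint directly; -k skips verification of the self-signed cert
    curl -sk --max-time 5 https://localhost:8441/healthz || echo "apiserver not responding"
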
	I1206 10:53:40.675828  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:40.675841  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:40.744680  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:40.744702  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
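
The cycle above is minikube's per-component container probe: for each expected control-plane component it runs crictl ps filtered by name and records an empty result. The same probe can be reproduced on the node with a short loop built from the commands shown in the log (a sketch; the component list is the one the collector checks here):

    # probe each control-plane component the way the log collector does
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "no container found matching \"$name\""
      else
        echo "$name: $ids"
      fi
    done
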
	I1206 10:53:43.275550  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:43.285722  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:43.285783  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:43.312235  405191 cri.go:89] found id: ""
	I1206 10:53:43.312249  405191 logs.go:282] 0 containers: []
	W1206 10:53:43.312262  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:43.312278  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:43.312337  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:43.338204  405191 cri.go:89] found id: ""
	I1206 10:53:43.338219  405191 logs.go:282] 0 containers: []
	W1206 10:53:43.338226  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:43.338249  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:43.338321  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:43.363434  405191 cri.go:89] found id: ""
	I1206 10:53:43.363455  405191 logs.go:282] 0 containers: []
	W1206 10:53:43.363463  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:43.363480  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:43.363562  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:43.390724  405191 cri.go:89] found id: ""
	I1206 10:53:43.390738  405191 logs.go:282] 0 containers: []
	W1206 10:53:43.390745  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:43.390750  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:43.390824  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:43.416427  405191 cri.go:89] found id: ""
	I1206 10:53:43.416442  405191 logs.go:282] 0 containers: []
	W1206 10:53:43.416449  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:43.416454  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:43.416511  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:43.446598  405191 cri.go:89] found id: ""
	I1206 10:53:43.446612  405191 logs.go:282] 0 containers: []
	W1206 10:53:43.446619  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:43.446625  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:43.446695  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:43.472759  405191 cri.go:89] found id: ""
	I1206 10:53:43.472773  405191 logs.go:282] 0 containers: []
	W1206 10:53:43.472779  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:43.472787  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:43.472797  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:43.538686  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:43.538706  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:43.553731  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:43.553746  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:43.618535  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:43.609715   13531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:43.610485   13531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:43.612195   13531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:43.612812   13531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:43.614536   13531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:43.609715   13531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:43.610485   13531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:43.612195   13531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:43.612812   13531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:43.614536   13531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:53:43.618556  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:43.618570  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:43.690132  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:43.690152  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
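
The timestamps (10:53:40, :43, :46, ...) show the collector retrying the whole probe roughly every three seconds while it waits for a kube-apiserver process to appear. A minimal reproduction of that wait, as a sketch (the pgrep pattern and ~3s cadence come from the log; the 90s deadline is an illustrative value, not necessarily minikube's):

    # poll for a running kube-apiserver process, giving up after an illustrative deadline
    deadline=$(( $(date +%s) + 90 ))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      if [ "$(date +%s)" -ge "$deadline" ]; then
        echo "timed out waiting for kube-apiserver" >&2
        break
      fi
      sleep 3
    done
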
	I1206 10:53:46.225047  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:46.236105  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:46.236179  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:46.269036  405191 cri.go:89] found id: ""
	I1206 10:53:46.269066  405191 logs.go:282] 0 containers: []
	W1206 10:53:46.269074  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:46.269079  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:46.269151  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:46.300616  405191 cri.go:89] found id: ""
	I1206 10:53:46.300631  405191 logs.go:282] 0 containers: []
	W1206 10:53:46.300639  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:46.300645  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:46.300707  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:46.330077  405191 cri.go:89] found id: ""
	I1206 10:53:46.330102  405191 logs.go:282] 0 containers: []
	W1206 10:53:46.330110  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:46.330115  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:46.330189  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:46.361893  405191 cri.go:89] found id: ""
	I1206 10:53:46.361908  405191 logs.go:282] 0 containers: []
	W1206 10:53:46.361915  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:46.361920  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:46.361991  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:46.387920  405191 cri.go:89] found id: ""
	I1206 10:53:46.387934  405191 logs.go:282] 0 containers: []
	W1206 10:53:46.387941  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:46.387947  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:46.388006  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:46.415440  405191 cri.go:89] found id: ""
	I1206 10:53:46.415463  405191 logs.go:282] 0 containers: []
	W1206 10:53:46.415470  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:46.415475  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:46.415534  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:46.442198  405191 cri.go:89] found id: ""
	I1206 10:53:46.442211  405191 logs.go:282] 0 containers: []
	W1206 10:53:46.442219  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:46.442226  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:46.442239  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:46.457274  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:46.457290  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:46.520346  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:46.512290   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:46.512824   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:46.514476   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:46.514946   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:46.516438   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:46.512290   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:46.512824   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:46.514476   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:46.514946   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:46.516438   13632 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:53:46.520388  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:46.520399  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:46.595642  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:46.595673  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:46.626749  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:46.626769  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:49.193445  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:49.203743  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:49.203807  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:49.233557  405191 cri.go:89] found id: ""
	I1206 10:53:49.233571  405191 logs.go:282] 0 containers: []
	W1206 10:53:49.233578  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:49.233583  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:49.233643  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:49.265569  405191 cri.go:89] found id: ""
	I1206 10:53:49.265583  405191 logs.go:282] 0 containers: []
	W1206 10:53:49.265590  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:49.265595  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:49.265651  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:49.296146  405191 cri.go:89] found id: ""
	I1206 10:53:49.296159  405191 logs.go:282] 0 containers: []
	W1206 10:53:49.296166  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:49.296172  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:49.296232  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:49.321471  405191 cri.go:89] found id: ""
	I1206 10:53:49.321485  405191 logs.go:282] 0 containers: []
	W1206 10:53:49.321492  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:49.321498  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:49.321556  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:49.346537  405191 cri.go:89] found id: ""
	I1206 10:53:49.346551  405191 logs.go:282] 0 containers: []
	W1206 10:53:49.346571  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:49.346577  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:49.346693  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:49.372292  405191 cri.go:89] found id: ""
	I1206 10:53:49.372307  405191 logs.go:282] 0 containers: []
	W1206 10:53:49.372314  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:49.372320  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:49.372382  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:49.397395  405191 cri.go:89] found id: ""
	I1206 10:53:49.397408  405191 logs.go:282] 0 containers: []
	W1206 10:53:49.397415  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:49.397422  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:49.397432  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:49.464359  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:49.464378  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:49.479746  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:49.479762  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:49.542949  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:49.534167   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:49.534752   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:49.536598   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:49.537091   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:49.538580   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:49.534167   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:49.534752   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:49.536598   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:49.537091   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:49.538580   13743 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:53:49.542959  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:49.542969  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:49.612749  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:49.612769  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
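
Each pass also captures the last 400 lines of the kubelet and CRI-O journals plus warning-and-above kernel messages. Bundling those same captures into one file for offline inspection is straightforward (a sketch reusing the exact journalctl/dmesg invocations from the log; the output path is illustrative):

    # collect the same node logs the collector gathers into a single file
    out=/tmp/minikube-node-logs.txt          # illustrative path
    {
      echo "=== kubelet ===";    sudo journalctl -u kubelet -n 400
      echo "=== crio ===";       sudo journalctl -u crio -n 400
      echo "=== dmesg ===";      sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
      echo "=== containers ==="; sudo crictl ps -a || sudo docker ps -a
    } > "$out"
    echo "wrote $out"
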
	I1206 10:53:52.142276  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:52.152804  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:52.152867  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:52.179560  405191 cri.go:89] found id: ""
	I1206 10:53:52.179575  405191 logs.go:282] 0 containers: []
	W1206 10:53:52.179582  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:52.179587  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:52.179642  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:52.204827  405191 cri.go:89] found id: ""
	I1206 10:53:52.204842  405191 logs.go:282] 0 containers: []
	W1206 10:53:52.204849  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:52.204854  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:52.204917  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:52.250790  405191 cri.go:89] found id: ""
	I1206 10:53:52.250804  405191 logs.go:282] 0 containers: []
	W1206 10:53:52.250811  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:52.250816  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:52.250886  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:52.282140  405191 cri.go:89] found id: ""
	I1206 10:53:52.282153  405191 logs.go:282] 0 containers: []
	W1206 10:53:52.282161  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:52.282166  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:52.282225  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:52.314373  405191 cri.go:89] found id: ""
	I1206 10:53:52.314387  405191 logs.go:282] 0 containers: []
	W1206 10:53:52.314395  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:52.314400  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:52.314471  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:52.339037  405191 cri.go:89] found id: ""
	I1206 10:53:52.339051  405191 logs.go:282] 0 containers: []
	W1206 10:53:52.339058  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:52.339064  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:52.339124  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:52.366113  405191 cri.go:89] found id: ""
	I1206 10:53:52.366127  405191 logs.go:282] 0 containers: []
	W1206 10:53:52.366134  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:52.366142  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:52.366152  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:52.436368  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:52.436388  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:52.451468  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:52.451487  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:52.518739  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:52.509542   13849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:52.509966   13849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:52.511603   13849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:52.511955   13849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:52.513754   13849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:52.509542   13849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:52.509966   13849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:52.511603   13849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:52.511955   13849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:52.513754   13849 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:53:52.518760  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:52.518777  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:52.593784  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:52.593805  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:55.124735  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:55.135510  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:55.135574  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:55.162613  405191 cri.go:89] found id: ""
	I1206 10:53:55.162626  405191 logs.go:282] 0 containers: []
	W1206 10:53:55.162633  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:55.162638  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:55.162703  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:55.189655  405191 cri.go:89] found id: ""
	I1206 10:53:55.189669  405191 logs.go:282] 0 containers: []
	W1206 10:53:55.189676  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:55.189682  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:55.189786  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:55.215289  405191 cri.go:89] found id: ""
	I1206 10:53:55.215303  405191 logs.go:282] 0 containers: []
	W1206 10:53:55.215310  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:55.215315  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:55.215402  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:55.247890  405191 cri.go:89] found id: ""
	I1206 10:53:55.247913  405191 logs.go:282] 0 containers: []
	W1206 10:53:55.247921  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:55.247926  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:55.247992  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:55.283368  405191 cri.go:89] found id: ""
	I1206 10:53:55.283409  405191 logs.go:282] 0 containers: []
	W1206 10:53:55.283416  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:55.283422  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:55.283516  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:55.310596  405191 cri.go:89] found id: ""
	I1206 10:53:55.310609  405191 logs.go:282] 0 containers: []
	W1206 10:53:55.310627  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:55.310632  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:55.310712  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:55.337361  405191 cri.go:89] found id: ""
	I1206 10:53:55.337374  405191 logs.go:282] 0 containers: []
	W1206 10:53:55.337381  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:55.337389  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:55.337399  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:55.404341  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:55.404361  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:55.419687  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:55.419705  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:55.485498  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:55.476614   13958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:55.477821   13958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:55.478840   13958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:55.479810   13958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:55.480435   13958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:55.476614   13958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:55.477821   13958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:55.478840   13958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:55.479810   13958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:55.480435   13958 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:53:55.485509  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:55.485522  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:55.555911  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:55.555932  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:53:58.088179  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:53:58.099010  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:53:58.099069  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:53:58.124686  405191 cri.go:89] found id: ""
	I1206 10:53:58.124700  405191 logs.go:282] 0 containers: []
	W1206 10:53:58.124710  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:53:58.124716  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:53:58.124773  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:53:58.149717  405191 cri.go:89] found id: ""
	I1206 10:53:58.149730  405191 logs.go:282] 0 containers: []
	W1206 10:53:58.149738  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:53:58.149743  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:53:58.149800  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:53:58.177293  405191 cri.go:89] found id: ""
	I1206 10:53:58.177307  405191 logs.go:282] 0 containers: []
	W1206 10:53:58.177314  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:53:58.177319  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:53:58.177389  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:53:58.203540  405191 cri.go:89] found id: ""
	I1206 10:53:58.203554  405191 logs.go:282] 0 containers: []
	W1206 10:53:58.203562  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:53:58.203567  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:53:58.203632  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:53:58.237354  405191 cri.go:89] found id: ""
	I1206 10:53:58.237377  405191 logs.go:282] 0 containers: []
	W1206 10:53:58.237385  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:53:58.237390  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:53:58.237459  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:53:58.269725  405191 cri.go:89] found id: ""
	I1206 10:53:58.269739  405191 logs.go:282] 0 containers: []
	W1206 10:53:58.269746  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:53:58.269751  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:53:58.269821  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:53:58.297406  405191 cri.go:89] found id: ""
	I1206 10:53:58.297420  405191 logs.go:282] 0 containers: []
	W1206 10:53:58.297427  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:53:58.297435  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:53:58.297445  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:53:58.363296  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:53:58.363319  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:53:58.379154  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:53:58.379170  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:53:58.448306  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:53:58.438857   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:58.439654   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:58.441442   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:58.441790   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:58.443511   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:53:58.438857   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:58.439654   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:58.441442   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:58.441790   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:53:58.443511   14061 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:53:58.448317  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:53:58.448331  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:53:58.518384  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:53:58.518408  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:01.052183  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:01.062404  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:01.062462  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:01.087509  405191 cri.go:89] found id: ""
	I1206 10:54:01.087523  405191 logs.go:282] 0 containers: []
	W1206 10:54:01.087530  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:01.087536  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:01.087598  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:01.113371  405191 cri.go:89] found id: ""
	I1206 10:54:01.113385  405191 logs.go:282] 0 containers: []
	W1206 10:54:01.113392  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:01.113397  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:01.113456  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:01.140194  405191 cri.go:89] found id: ""
	I1206 10:54:01.140208  405191 logs.go:282] 0 containers: []
	W1206 10:54:01.140214  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:01.140220  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:01.140282  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:01.166431  405191 cri.go:89] found id: ""
	I1206 10:54:01.166445  405191 logs.go:282] 0 containers: []
	W1206 10:54:01.166452  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:01.166460  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:01.166523  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:01.195742  405191 cri.go:89] found id: ""
	I1206 10:54:01.195756  405191 logs.go:282] 0 containers: []
	W1206 10:54:01.195764  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:01.195769  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:01.195835  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:01.228731  405191 cri.go:89] found id: ""
	I1206 10:54:01.228746  405191 logs.go:282] 0 containers: []
	W1206 10:54:01.228753  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:01.228759  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:01.228821  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:01.260175  405191 cri.go:89] found id: ""
	I1206 10:54:01.260189  405191 logs.go:282] 0 containers: []
	W1206 10:54:01.260196  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:01.260204  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:01.260214  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:01.337819  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:01.337839  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:01.353486  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:01.353502  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:01.423278  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:01.414904   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:01.415292   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:01.417033   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:01.417517   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:01.418780   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:01.414904   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:01.415292   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:01.417033   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:01.417517   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:01.418780   14163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:01.423288  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:01.423299  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:01.492536  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:01.492556  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:04.028526  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:04.039535  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:04.039600  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:04.069150  405191 cri.go:89] found id: ""
	I1206 10:54:04.069164  405191 logs.go:282] 0 containers: []
	W1206 10:54:04.069172  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:04.069177  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:04.069238  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:04.100343  405191 cri.go:89] found id: ""
	I1206 10:54:04.100357  405191 logs.go:282] 0 containers: []
	W1206 10:54:04.100364  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:04.100369  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:04.100431  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:04.127347  405191 cri.go:89] found id: ""
	I1206 10:54:04.127361  405191 logs.go:282] 0 containers: []
	W1206 10:54:04.127368  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:04.127395  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:04.127466  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:04.154542  405191 cri.go:89] found id: ""
	I1206 10:54:04.154557  405191 logs.go:282] 0 containers: []
	W1206 10:54:04.154564  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:04.154569  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:04.154628  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:04.181647  405191 cri.go:89] found id: ""
	I1206 10:54:04.181661  405191 logs.go:282] 0 containers: []
	W1206 10:54:04.181668  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:04.181676  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:04.181739  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:04.210872  405191 cri.go:89] found id: ""
	I1206 10:54:04.210886  405191 logs.go:282] 0 containers: []
	W1206 10:54:04.210893  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:04.210899  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:04.210962  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:04.246454  405191 cri.go:89] found id: ""
	I1206 10:54:04.246468  405191 logs.go:282] 0 containers: []
	W1206 10:54:04.246482  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:04.246490  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:04.246501  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:04.322848  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:04.322872  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:04.338928  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:04.338945  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:04.409905  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:04.400164   14270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:04.400961   14270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:04.402781   14270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:04.403461   14270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:04.404662   14270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:04.400164   14270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:04.400961   14270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:04.402781   14270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:04.403461   14270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:04.404662   14270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:04.409916  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:04.409928  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:04.480369  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:04.480389  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:07.012345  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:07.022891  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:07.022962  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:07.049835  405191 cri.go:89] found id: ""
	I1206 10:54:07.049849  405191 logs.go:282] 0 containers: []
	W1206 10:54:07.049856  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:07.049861  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:07.049925  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:07.076617  405191 cri.go:89] found id: ""
	I1206 10:54:07.076631  405191 logs.go:282] 0 containers: []
	W1206 10:54:07.076637  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:07.076643  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:07.076704  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:07.103202  405191 cri.go:89] found id: ""
	I1206 10:54:07.103216  405191 logs.go:282] 0 containers: []
	W1206 10:54:07.103223  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:07.103229  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:07.103288  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:07.129964  405191 cri.go:89] found id: ""
	I1206 10:54:07.129977  405191 logs.go:282] 0 containers: []
	W1206 10:54:07.129984  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:07.129989  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:07.130048  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:07.157459  405191 cri.go:89] found id: ""
	I1206 10:54:07.157473  405191 logs.go:282] 0 containers: []
	W1206 10:54:07.157480  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:07.157485  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:07.157551  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:07.183797  405191 cri.go:89] found id: ""
	I1206 10:54:07.183811  405191 logs.go:282] 0 containers: []
	W1206 10:54:07.183818  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:07.183823  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:07.183881  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:07.209675  405191 cri.go:89] found id: ""
	I1206 10:54:07.209689  405191 logs.go:282] 0 containers: []
	W1206 10:54:07.209697  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:07.209704  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:07.209715  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:07.228202  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:07.228225  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:07.312770  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:07.304201   14373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:07.304672   14373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:07.306492   14373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:07.307083   14373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:07.308768   14373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:07.304201   14373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:07.304672   14373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:07.306492   14373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:07.307083   14373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:07.308768   14373 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:07.312782  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:07.312792  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:07.383254  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:07.383275  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:07.414045  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:07.414060  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:09.985551  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:09.995745  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:09.995806  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:10.030868  405191 cri.go:89] found id: ""
	I1206 10:54:10.030884  405191 logs.go:282] 0 containers: []
	W1206 10:54:10.030892  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:10.030898  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:10.030967  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:10.060505  405191 cri.go:89] found id: ""
	I1206 10:54:10.060520  405191 logs.go:282] 0 containers: []
	W1206 10:54:10.060527  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:10.060532  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:10.060596  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:10.087945  405191 cri.go:89] found id: ""
	I1206 10:54:10.087979  405191 logs.go:282] 0 containers: []
	W1206 10:54:10.087986  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:10.087992  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:10.088069  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:10.116434  405191 cri.go:89] found id: ""
	I1206 10:54:10.116448  405191 logs.go:282] 0 containers: []
	W1206 10:54:10.116455  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:10.116461  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:10.116523  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:10.144560  405191 cri.go:89] found id: ""
	I1206 10:54:10.144572  405191 logs.go:282] 0 containers: []
	W1206 10:54:10.144579  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:10.144584  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:10.144645  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:10.173019  405191 cri.go:89] found id: ""
	I1206 10:54:10.173033  405191 logs.go:282] 0 containers: []
	W1206 10:54:10.173040  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:10.173046  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:10.173105  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:10.200809  405191 cri.go:89] found id: ""
	I1206 10:54:10.200823  405191 logs.go:282] 0 containers: []
	W1206 10:54:10.200830  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:10.200837  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:10.200847  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:10.215623  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:10.215642  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:10.300302  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:10.291573   14479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:10.292121   14479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:10.293856   14479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:10.294451   14479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:10.295989   14479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:10.291573   14479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:10.292121   14479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:10.293856   14479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:10.294451   14479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:10.295989   14479 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:10.300314  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:10.300325  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:10.369603  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:10.369624  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:10.402671  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:10.402687  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:12.968162  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:12.978411  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:12.978473  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:13.010579  405191 cri.go:89] found id: ""
	I1206 10:54:13.010593  405191 logs.go:282] 0 containers: []
	W1206 10:54:13.010601  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:13.010606  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:13.010669  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:13.037103  405191 cri.go:89] found id: ""
	I1206 10:54:13.037118  405191 logs.go:282] 0 containers: []
	W1206 10:54:13.037125  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:13.037131  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:13.037199  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:13.063109  405191 cri.go:89] found id: ""
	I1206 10:54:13.063124  405191 logs.go:282] 0 containers: []
	W1206 10:54:13.063131  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:13.063136  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:13.063195  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:13.088780  405191 cri.go:89] found id: ""
	I1206 10:54:13.088794  405191 logs.go:282] 0 containers: []
	W1206 10:54:13.088801  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:13.088806  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:13.088868  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:13.114682  405191 cri.go:89] found id: ""
	I1206 10:54:13.114696  405191 logs.go:282] 0 containers: []
	W1206 10:54:13.114703  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:13.114708  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:13.114952  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:13.141850  405191 cri.go:89] found id: ""
	I1206 10:54:13.141866  405191 logs.go:282] 0 containers: []
	W1206 10:54:13.141873  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:13.141880  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:13.141945  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:13.167958  405191 cri.go:89] found id: ""
	I1206 10:54:13.167975  405191 logs.go:282] 0 containers: []
	W1206 10:54:13.167982  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:13.167990  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:13.168002  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:13.237314  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:13.237335  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:13.254137  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:13.254164  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:13.322226  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:13.313022   14585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:13.313640   14585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:13.315145   14585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:13.315780   14585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:13.317512   14585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:13.313022   14585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:13.313640   14585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:13.315145   14585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:13.315780   14585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:13.317512   14585 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:13.322237  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:13.322248  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:13.394938  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:13.394958  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:15.923162  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:15.933287  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:15.933346  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:15.958679  405191 cri.go:89] found id: ""
	I1206 10:54:15.958694  405191 logs.go:282] 0 containers: []
	W1206 10:54:15.958701  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:15.958706  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:15.958768  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:15.986252  405191 cri.go:89] found id: ""
	I1206 10:54:15.986267  405191 logs.go:282] 0 containers: []
	W1206 10:54:15.986274  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:15.986279  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:15.986339  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:16.015947  405191 cri.go:89] found id: ""
	I1206 10:54:16.015961  405191 logs.go:282] 0 containers: []
	W1206 10:54:16.015968  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:16.015973  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:16.016038  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:16.046583  405191 cri.go:89] found id: ""
	I1206 10:54:16.046597  405191 logs.go:282] 0 containers: []
	W1206 10:54:16.046604  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:16.046609  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:16.046673  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:16.073401  405191 cri.go:89] found id: ""
	I1206 10:54:16.073415  405191 logs.go:282] 0 containers: []
	W1206 10:54:16.073422  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:16.073428  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:16.073489  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:16.099301  405191 cri.go:89] found id: ""
	I1206 10:54:16.099315  405191 logs.go:282] 0 containers: []
	W1206 10:54:16.099321  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:16.099327  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:16.099409  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:16.132045  405191 cri.go:89] found id: ""
	I1206 10:54:16.132060  405191 logs.go:282] 0 containers: []
	W1206 10:54:16.132067  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:16.132075  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:16.132086  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:16.201949  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:16.191660   14680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:16.193954   14680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:16.194695   14680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:16.196370   14680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:16.196866   14680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:16.191660   14680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:16.193954   14680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:16.194695   14680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:16.196370   14680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:16.196866   14680 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:16.201962  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:16.201972  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:16.277750  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:16.277769  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:16.311130  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:16.311148  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:16.377771  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:16.377793  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:18.893108  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:18.903283  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:18.903345  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:18.927861  405191 cri.go:89] found id: ""
	I1206 10:54:18.927875  405191 logs.go:282] 0 containers: []
	W1206 10:54:18.927882  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:18.927887  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:18.927945  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:18.953460  405191 cri.go:89] found id: ""
	I1206 10:54:18.953474  405191 logs.go:282] 0 containers: []
	W1206 10:54:18.953482  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:18.953486  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:18.953563  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:18.981063  405191 cri.go:89] found id: ""
	I1206 10:54:18.981077  405191 logs.go:282] 0 containers: []
	W1206 10:54:18.981088  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:18.981093  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:18.981154  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:19.011134  405191 cri.go:89] found id: ""
	I1206 10:54:19.011148  405191 logs.go:282] 0 containers: []
	W1206 10:54:19.011156  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:19.011161  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:19.011221  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:19.037866  405191 cri.go:89] found id: ""
	I1206 10:54:19.037889  405191 logs.go:282] 0 containers: []
	W1206 10:54:19.037895  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:19.037901  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:19.037972  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:19.067672  405191 cri.go:89] found id: ""
	I1206 10:54:19.067685  405191 logs.go:282] 0 containers: []
	W1206 10:54:19.067692  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:19.067697  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:19.067753  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:19.092891  405191 cri.go:89] found id: ""
	I1206 10:54:19.092906  405191 logs.go:282] 0 containers: []
	W1206 10:54:19.092913  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:19.092921  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:19.092933  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:19.158186  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:19.149512   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:19.150168   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:19.151831   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:19.152364   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:19.154131   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:19.149512   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:19.150168   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:19.151831   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:19.152364   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:19.154131   14787 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:19.158196  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:19.158209  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:19.231681  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:19.231701  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:19.267680  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:19.267704  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:19.341777  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:19.341796  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:21.856895  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:21.867600  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:21.867659  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:21.899562  405191 cri.go:89] found id: ""
	I1206 10:54:21.899576  405191 logs.go:282] 0 containers: []
	W1206 10:54:21.899583  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:21.899589  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:21.899647  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:21.924433  405191 cri.go:89] found id: ""
	I1206 10:54:21.924446  405191 logs.go:282] 0 containers: []
	W1206 10:54:21.924454  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:21.924459  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:21.924517  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:21.949461  405191 cri.go:89] found id: ""
	I1206 10:54:21.949476  405191 logs.go:282] 0 containers: []
	W1206 10:54:21.949482  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:21.949493  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:21.949550  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:21.976373  405191 cri.go:89] found id: ""
	I1206 10:54:21.976388  405191 logs.go:282] 0 containers: []
	W1206 10:54:21.976396  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:21.976401  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:21.976457  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:22.025051  405191 cri.go:89] found id: ""
	I1206 10:54:22.025074  405191 logs.go:282] 0 containers: []
	W1206 10:54:22.025095  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:22.025101  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:22.025214  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:22.054790  405191 cri.go:89] found id: ""
	I1206 10:54:22.054804  405191 logs.go:282] 0 containers: []
	W1206 10:54:22.054811  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:22.054817  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:22.054873  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:22.081220  405191 cri.go:89] found id: ""
	I1206 10:54:22.081235  405191 logs.go:282] 0 containers: []
	W1206 10:54:22.081242  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:22.081251  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:22.081262  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:22.147339  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:22.147359  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:22.162252  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:22.162268  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:22.233807  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:22.219327   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:22.220102   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:22.225452   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:22.227256   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:22.228864   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:22.219327   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:22.220102   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:22.225452   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:22.227256   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:22.228864   14895 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:22.233819  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:22.233838  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:22.312101  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:22.312123  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:24.852672  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:24.863210  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:24.863271  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:24.889673  405191 cri.go:89] found id: ""
	I1206 10:54:24.889687  405191 logs.go:282] 0 containers: []
	W1206 10:54:24.889695  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:24.889700  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:24.889758  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:24.920816  405191 cri.go:89] found id: ""
	I1206 10:54:24.920830  405191 logs.go:282] 0 containers: []
	W1206 10:54:24.920837  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:24.920842  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:24.920900  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:24.945958  405191 cri.go:89] found id: ""
	I1206 10:54:24.945972  405191 logs.go:282] 0 containers: []
	W1206 10:54:24.945980  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:24.945985  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:24.946046  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:24.970886  405191 cri.go:89] found id: ""
	I1206 10:54:24.970900  405191 logs.go:282] 0 containers: []
	W1206 10:54:24.970907  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:24.970912  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:24.970970  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:25.000298  405191 cri.go:89] found id: ""
	I1206 10:54:25.000315  405191 logs.go:282] 0 containers: []
	W1206 10:54:25.000323  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:25.000329  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:25.000399  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:25.033867  405191 cri.go:89] found id: ""
	I1206 10:54:25.033882  405191 logs.go:282] 0 containers: []
	W1206 10:54:25.033890  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:25.033895  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:25.033960  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:25.060149  405191 cri.go:89] found id: ""
	I1206 10:54:25.060162  405191 logs.go:282] 0 containers: []
	W1206 10:54:25.060169  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:25.060177  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:25.060188  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:25.128734  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:25.120144   14994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:25.120771   14994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:25.122547   14994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:25.123145   14994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:25.124861   14994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:25.120144   14994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:25.120771   14994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:25.122547   14994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:25.123145   14994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:25.124861   14994 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:25.128746  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:25.128757  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:25.198421  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:25.198443  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:25.239321  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:25.239341  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:25.316857  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:25.316878  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:27.833465  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:27.844470  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:27.844528  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:27.870606  405191 cri.go:89] found id: ""
	I1206 10:54:27.870621  405191 logs.go:282] 0 containers: []
	W1206 10:54:27.870628  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:27.870633  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:27.870693  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:27.894893  405191 cri.go:89] found id: ""
	I1206 10:54:27.894906  405191 logs.go:282] 0 containers: []
	W1206 10:54:27.894913  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:27.894918  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:27.894973  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:27.920116  405191 cri.go:89] found id: ""
	I1206 10:54:27.920129  405191 logs.go:282] 0 containers: []
	W1206 10:54:27.920136  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:27.920142  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:27.920201  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:27.946774  405191 cri.go:89] found id: ""
	I1206 10:54:27.946788  405191 logs.go:282] 0 containers: []
	W1206 10:54:27.946798  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:27.946806  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:27.946869  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:27.973164  405191 cri.go:89] found id: ""
	I1206 10:54:27.973178  405191 logs.go:282] 0 containers: []
	W1206 10:54:27.973185  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:27.973190  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:27.973247  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:28.005225  405191 cri.go:89] found id: ""
	I1206 10:54:28.005240  405191 logs.go:282] 0 containers: []
	W1206 10:54:28.005248  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:28.005255  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:28.005329  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:28.034341  405191 cri.go:89] found id: ""
	I1206 10:54:28.034355  405191 logs.go:282] 0 containers: []
	W1206 10:54:28.034362  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:28.034370  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:28.034381  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:28.107547  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:28.107567  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:28.136561  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:28.136578  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:28.206187  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:28.206206  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:28.224556  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:28.224580  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:28.311110  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:28.302520   15128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:28.303509   15128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:28.305089   15128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:28.305582   15128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:28.307158   15128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:28.302520   15128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:28.303509   15128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:28.305089   15128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:28.305582   15128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:28.307158   15128 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
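The connection-refused errors persist through every cycle above, which is consistent with nothing listening on the apiserver port at all. A quick on-node check to confirm that; the commands are illustrative, with the port and URL taken from the errors above:

    # Nothing bound to 8441 means the apiserver process is absent,
    # matching the "connect: connection refused" lines in the log.
    sudo ss -ltnp | grep 8441
    curl -k "https://localhost:8441/api?timeout=32s"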
	I1206 10:54:30.811550  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:30.821711  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:30.821769  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:30.850956  405191 cri.go:89] found id: ""
	I1206 10:54:30.850970  405191 logs.go:282] 0 containers: []
	W1206 10:54:30.850979  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:30.850984  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:30.851045  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:30.876542  405191 cri.go:89] found id: ""
	I1206 10:54:30.876558  405191 logs.go:282] 0 containers: []
	W1206 10:54:30.876565  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:30.876571  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:30.876630  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:30.902552  405191 cri.go:89] found id: ""
	I1206 10:54:30.902566  405191 logs.go:282] 0 containers: []
	W1206 10:54:30.902573  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:30.902578  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:30.902635  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:30.928737  405191 cri.go:89] found id: ""
	I1206 10:54:30.928751  405191 logs.go:282] 0 containers: []
	W1206 10:54:30.928758  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:30.928764  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:30.928829  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:30.954309  405191 cri.go:89] found id: ""
	I1206 10:54:30.954323  405191 logs.go:282] 0 containers: []
	W1206 10:54:30.954330  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:30.954335  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:30.954394  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:30.980239  405191 cri.go:89] found id: ""
	I1206 10:54:30.980251  405191 logs.go:282] 0 containers: []
	W1206 10:54:30.980258  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:30.980263  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:30.980319  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:31.010962  405191 cri.go:89] found id: ""
	I1206 10:54:31.010977  405191 logs.go:282] 0 containers: []
	W1206 10:54:31.010985  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:31.010994  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:31.011006  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:31.078259  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:31.069995   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:31.070621   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:31.072176   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:31.072646   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:31.074155   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:31.069995   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:31.070621   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:31.072176   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:31.072646   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:31.074155   15208 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:31.078270  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:31.078282  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:31.147428  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:31.147455  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:31.181028  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:31.181045  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:31.253555  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:31.253574  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:33.770610  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:33.781236  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:33.781299  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:33.806547  405191 cri.go:89] found id: ""
	I1206 10:54:33.806561  405191 logs.go:282] 0 containers: []
	W1206 10:54:33.806568  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:33.806574  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:33.806632  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:33.832359  405191 cri.go:89] found id: ""
	I1206 10:54:33.832371  405191 logs.go:282] 0 containers: []
	W1206 10:54:33.832379  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:33.832383  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:33.832442  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:33.857194  405191 cri.go:89] found id: ""
	I1206 10:54:33.857207  405191 logs.go:282] 0 containers: []
	W1206 10:54:33.857214  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:33.857219  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:33.857280  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:33.886113  405191 cri.go:89] found id: ""
	I1206 10:54:33.886126  405191 logs.go:282] 0 containers: []
	W1206 10:54:33.886133  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:33.886138  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:33.886194  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:33.914351  405191 cri.go:89] found id: ""
	I1206 10:54:33.914364  405191 logs.go:282] 0 containers: []
	W1206 10:54:33.914371  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:33.914376  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:33.914438  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:33.939584  405191 cri.go:89] found id: ""
	I1206 10:54:33.939598  405191 logs.go:282] 0 containers: []
	W1206 10:54:33.939605  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:33.939611  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:33.939683  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:33.965467  405191 cri.go:89] found id: ""
	I1206 10:54:33.965481  405191 logs.go:282] 0 containers: []
	W1206 10:54:33.965488  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:33.965496  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:33.965506  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:34.034434  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:34.034456  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:34.068244  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:34.068263  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:34.136528  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:34.136548  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:34.151695  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:34.151713  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:34.237655  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:34.227619   15334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:34.228750   15334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:34.231547   15334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:34.232096   15334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:34.233598   15334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:34.227619   15334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:34.228750   15334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:34.231547   15334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:34.232096   15334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:34.233598   15334 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:36.737997  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:36.748632  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:36.748739  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:36.774541  405191 cri.go:89] found id: ""
	I1206 10:54:36.774554  405191 logs.go:282] 0 containers: []
	W1206 10:54:36.774563  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:36.774568  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:36.774628  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:36.804563  405191 cri.go:89] found id: ""
	I1206 10:54:36.804577  405191 logs.go:282] 0 containers: []
	W1206 10:54:36.804585  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:36.804590  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:36.804649  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:36.829295  405191 cri.go:89] found id: ""
	I1206 10:54:36.829309  405191 logs.go:282] 0 containers: []
	W1206 10:54:36.829316  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:36.829322  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:36.829384  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:36.854740  405191 cri.go:89] found id: ""
	I1206 10:54:36.854754  405191 logs.go:282] 0 containers: []
	W1206 10:54:36.854761  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:36.854767  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:36.854827  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:36.879535  405191 cri.go:89] found id: ""
	I1206 10:54:36.879548  405191 logs.go:282] 0 containers: []
	W1206 10:54:36.879555  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:36.879560  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:36.879621  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:36.908804  405191 cri.go:89] found id: ""
	I1206 10:54:36.908818  405191 logs.go:282] 0 containers: []
	W1206 10:54:36.908826  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:36.908831  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:36.908891  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:36.935290  405191 cri.go:89] found id: ""
	I1206 10:54:36.935312  405191 logs.go:282] 0 containers: []
	W1206 10:54:36.935320  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:36.935328  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:36.935338  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:37.005221  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:37.005253  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:37.023044  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:37.023070  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:37.090033  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:37.082290   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:37.082864   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:37.084384   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:37.084721   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:37.086198   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:37.082290   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:37.082864   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:37.084384   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:37.084721   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:37.086198   15429 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:37.090044  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:37.090055  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:37.158891  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:37.158911  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:39.688451  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:39.698958  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:39.699020  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:39.725003  405191 cri.go:89] found id: ""
	I1206 10:54:39.725017  405191 logs.go:282] 0 containers: []
	W1206 10:54:39.725024  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:39.725029  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:39.725086  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:39.750186  405191 cri.go:89] found id: ""
	I1206 10:54:39.750208  405191 logs.go:282] 0 containers: []
	W1206 10:54:39.750215  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:39.750221  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:39.750286  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:39.777512  405191 cri.go:89] found id: ""
	I1206 10:54:39.777527  405191 logs.go:282] 0 containers: []
	W1206 10:54:39.777534  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:39.777539  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:39.777598  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:39.805960  405191 cri.go:89] found id: ""
	I1206 10:54:39.805974  405191 logs.go:282] 0 containers: []
	W1206 10:54:39.805981  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:39.805987  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:39.806048  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:39.832070  405191 cri.go:89] found id: ""
	I1206 10:54:39.832086  405191 logs.go:282] 0 containers: []
	W1206 10:54:39.832093  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:39.832099  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:39.832162  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:39.856950  405191 cri.go:89] found id: ""
	I1206 10:54:39.856964  405191 logs.go:282] 0 containers: []
	W1206 10:54:39.856970  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:39.856976  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:39.857034  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:39.882830  405191 cri.go:89] found id: ""
	I1206 10:54:39.882844  405191 logs.go:282] 0 containers: []
	W1206 10:54:39.882851  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:39.882859  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:39.882869  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:39.948996  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:39.949016  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:39.964250  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:39.964266  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:40.040200  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:40.026040   15535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:40.026898   15535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:40.028891   15535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:40.029963   15535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:40.030727   15535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:40.026040   15535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:40.026898   15535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:40.028891   15535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:40.029963   15535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:40.030727   15535 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:40.040211  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:40.040222  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:40.112805  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:40.112828  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:42.645898  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:42.656339  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:42.656399  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:42.681441  405191 cri.go:89] found id: ""
	I1206 10:54:42.681456  405191 logs.go:282] 0 containers: []
	W1206 10:54:42.681462  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:42.681468  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:42.681529  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:42.706692  405191 cri.go:89] found id: ""
	I1206 10:54:42.706706  405191 logs.go:282] 0 containers: []
	W1206 10:54:42.706713  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:42.706718  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:42.706781  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:42.734049  405191 cri.go:89] found id: ""
	I1206 10:54:42.734063  405191 logs.go:282] 0 containers: []
	W1206 10:54:42.734070  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:42.734075  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:42.734136  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:42.759095  405191 cri.go:89] found id: ""
	I1206 10:54:42.759115  405191 logs.go:282] 0 containers: []
	W1206 10:54:42.759123  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:42.759128  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:42.759190  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:42.786861  405191 cri.go:89] found id: ""
	I1206 10:54:42.786875  405191 logs.go:282] 0 containers: []
	W1206 10:54:42.786882  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:42.786887  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:42.786949  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:42.817648  405191 cri.go:89] found id: ""
	I1206 10:54:42.817663  405191 logs.go:282] 0 containers: []
	W1206 10:54:42.817670  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:42.817675  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:42.817738  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:42.844223  405191 cri.go:89] found id: ""
	I1206 10:54:42.844245  405191 logs.go:282] 0 containers: []
	W1206 10:54:42.844253  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:42.844261  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:42.844278  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:42.914866  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:42.904424   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:42.904903   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:42.907237   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:42.908578   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:42.909360   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:42.904424   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:42.904903   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:42.907237   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:42.908578   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:42.909360   15634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:42.914877  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:42.914888  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:42.987160  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:42.987181  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:43.017513  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:43.017529  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:43.084573  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:43.084595  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:45.600685  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:45.611239  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:45.611299  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:45.635510  405191 cri.go:89] found id: ""
	I1206 10:54:45.635525  405191 logs.go:282] 0 containers: []
	W1206 10:54:45.635532  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:45.635538  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:45.635604  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:45.664995  405191 cri.go:89] found id: ""
	I1206 10:54:45.665008  405191 logs.go:282] 0 containers: []
	W1206 10:54:45.665015  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:45.665020  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:45.665077  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:45.691036  405191 cri.go:89] found id: ""
	I1206 10:54:45.691050  405191 logs.go:282] 0 containers: []
	W1206 10:54:45.691057  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:45.691062  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:45.691120  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:45.716374  405191 cri.go:89] found id: ""
	I1206 10:54:45.716388  405191 logs.go:282] 0 containers: []
	W1206 10:54:45.716395  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:45.716400  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:45.716461  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:45.742083  405191 cri.go:89] found id: ""
	I1206 10:54:45.742097  405191 logs.go:282] 0 containers: []
	W1206 10:54:45.742105  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:45.742110  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:45.742177  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:45.767269  405191 cri.go:89] found id: ""
	I1206 10:54:45.767282  405191 logs.go:282] 0 containers: []
	W1206 10:54:45.767290  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:45.767295  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:45.767352  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:45.793130  405191 cri.go:89] found id: ""
	I1206 10:54:45.793144  405191 logs.go:282] 0 containers: []
	W1206 10:54:45.793151  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:45.793158  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:45.793169  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:45.822623  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:45.822639  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:45.889014  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:45.889036  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:45.903697  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:45.903713  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:45.967833  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:45.959169   15755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:45.960025   15755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:45.961643   15755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:45.962228   15755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:45.963959   15755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:45.959169   15755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:45.960025   15755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:45.961643   15755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:45.962228   15755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:45.963959   15755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:45.967843  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:45.967854  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:48.539593  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:48.549488  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:48.549547  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:48.578962  405191 cri.go:89] found id: ""
	I1206 10:54:48.578976  405191 logs.go:282] 0 containers: []
	W1206 10:54:48.578983  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:48.578989  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:48.579060  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:48.604320  405191 cri.go:89] found id: ""
	I1206 10:54:48.604335  405191 logs.go:282] 0 containers: []
	W1206 10:54:48.604342  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:48.604347  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:48.604407  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:48.630562  405191 cri.go:89] found id: ""
	I1206 10:54:48.630575  405191 logs.go:282] 0 containers: []
	W1206 10:54:48.630583  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:48.630588  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:48.630645  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:48.659186  405191 cri.go:89] found id: ""
	I1206 10:54:48.659200  405191 logs.go:282] 0 containers: []
	W1206 10:54:48.659207  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:48.659218  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:48.659278  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:48.686349  405191 cri.go:89] found id: ""
	I1206 10:54:48.686363  405191 logs.go:282] 0 containers: []
	W1206 10:54:48.686371  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:48.686376  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:48.686433  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:48.712958  405191 cri.go:89] found id: ""
	I1206 10:54:48.712973  405191 logs.go:282] 0 containers: []
	W1206 10:54:48.712980  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:48.712985  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:48.713045  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:48.738763  405191 cri.go:89] found id: ""
	I1206 10:54:48.738777  405191 logs.go:282] 0 containers: []
	W1206 10:54:48.738783  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:48.738791  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:48.738801  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:48.753416  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:48.753431  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:48.818598  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:48.810121   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:48.810830   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:48.812598   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:48.813183   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:48.814760   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:48.810121   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:48.810830   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:48.812598   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:48.813183   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:48.814760   15846 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:48.818609  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:48.818620  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:48.888023  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:48.888043  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:48.917094  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:48.917110  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:51.485627  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:51.497092  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:51.497157  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:51.529254  405191 cri.go:89] found id: ""
	I1206 10:54:51.529268  405191 logs.go:282] 0 containers: []
	W1206 10:54:51.529275  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:51.529281  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:51.529340  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:51.555292  405191 cri.go:89] found id: ""
	I1206 10:54:51.555305  405191 logs.go:282] 0 containers: []
	W1206 10:54:51.555312  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:51.555316  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:51.555390  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:51.580443  405191 cri.go:89] found id: ""
	I1206 10:54:51.580458  405191 logs.go:282] 0 containers: []
	W1206 10:54:51.580465  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:51.580470  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:51.580529  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:51.605907  405191 cri.go:89] found id: ""
	I1206 10:54:51.605921  405191 logs.go:282] 0 containers: []
	W1206 10:54:51.605928  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:51.605933  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:51.605991  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:51.630731  405191 cri.go:89] found id: ""
	I1206 10:54:51.630745  405191 logs.go:282] 0 containers: []
	W1206 10:54:51.630752  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:51.630757  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:51.630816  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:51.655906  405191 cri.go:89] found id: ""
	I1206 10:54:51.655919  405191 logs.go:282] 0 containers: []
	W1206 10:54:51.655926  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:51.655931  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:51.655987  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:51.681242  405191 cri.go:89] found id: ""
	I1206 10:54:51.681256  405191 logs.go:282] 0 containers: []
	W1206 10:54:51.681267  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:51.681275  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:51.681285  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:51.750829  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:51.750849  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:51.766064  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:51.766080  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:51.831905  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:51.823637   15956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:51.824299   15956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:51.825840   15956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:51.826394   15956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:51.827960   15956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:51.823637   15956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:51.824299   15956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:51.825840   15956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:51.826394   15956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:51.827960   15956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:51.831915  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:51.831925  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:51.901462  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:51.901484  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:54.431319  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:54.441623  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:54.441686  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:54.470441  405191 cri.go:89] found id: ""
	I1206 10:54:54.470456  405191 logs.go:282] 0 containers: []
	W1206 10:54:54.470463  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:54.470469  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:54.470527  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:54.505844  405191 cri.go:89] found id: ""
	I1206 10:54:54.505858  405191 logs.go:282] 0 containers: []
	W1206 10:54:54.505865  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:54.505870  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:54.505931  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:54.540765  405191 cri.go:89] found id: ""
	I1206 10:54:54.540779  405191 logs.go:282] 0 containers: []
	W1206 10:54:54.540786  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:54.540791  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:54.540859  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:54.568534  405191 cri.go:89] found id: ""
	I1206 10:54:54.568559  405191 logs.go:282] 0 containers: []
	W1206 10:54:54.568566  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:54.568571  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:54.568631  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:54.598488  405191 cri.go:89] found id: ""
	I1206 10:54:54.598501  405191 logs.go:282] 0 containers: []
	W1206 10:54:54.598508  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:54.598513  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:54.598573  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:54.625601  405191 cri.go:89] found id: ""
	I1206 10:54:54.625615  405191 logs.go:282] 0 containers: []
	W1206 10:54:54.625622  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:54.625627  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:54.625684  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:54.651039  405191 cri.go:89] found id: ""
	I1206 10:54:54.651053  405191 logs.go:282] 0 containers: []
	W1206 10:54:54.651069  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:54.651077  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:54.651088  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:54.721711  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:54.712700   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:54.713574   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:54.715366   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:54.715761   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:54.717298   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:54.712700   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:54.713574   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:54.715366   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:54.715761   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:54.717298   16058 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:54.721724  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:54.721734  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:54.793778  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:54.793803  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:54:54.825565  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:54.825580  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:54.891107  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:54.891127  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
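The cycle above is one iteration of minikube's wait for the apiserver: pgrep for the process, then ask the CRI by name for each control-plane container it expects. A minimal manual sketch of the same checks, assuming a shell inside the node (for example via minikube ssh); the for-loop wrapper is added here for brevity, while the individual commands are exactly the ones logged:

    # One poll iteration, by hand. Run inside the minikube node.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo 'no kube-apiserver process'

    # Ask the CRI for each expected control-plane container, as the log does.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -n "$ids" ]; then
        echo "$name: $ids"
      else
        echo "no container matching \"$name\""
      fi
    done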
	I1206 10:54:57.406177  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:54:57.416168  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:54:57.416231  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:54:57.444260  405191 cri.go:89] found id: ""
	I1206 10:54:57.444274  405191 logs.go:282] 0 containers: []
	W1206 10:54:57.444281  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:54:57.444286  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:54:57.444352  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:54:57.473921  405191 cri.go:89] found id: ""
	I1206 10:54:57.473935  405191 logs.go:282] 0 containers: []
	W1206 10:54:57.473942  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:54:57.473947  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:54:57.474006  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:54:57.507969  405191 cri.go:89] found id: ""
	I1206 10:54:57.507983  405191 logs.go:282] 0 containers: []
	W1206 10:54:57.507990  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:54:57.507995  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:54:57.508057  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:54:57.536405  405191 cri.go:89] found id: ""
	I1206 10:54:57.536420  405191 logs.go:282] 0 containers: []
	W1206 10:54:57.536428  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:54:57.536433  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:54:57.536502  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:54:57.564180  405191 cri.go:89] found id: ""
	I1206 10:54:57.564194  405191 logs.go:282] 0 containers: []
	W1206 10:54:57.564201  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:54:57.564206  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:54:57.564271  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:54:57.594665  405191 cri.go:89] found id: ""
	I1206 10:54:57.594679  405191 logs.go:282] 0 containers: []
	W1206 10:54:57.594687  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:54:57.594692  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:54:57.594751  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:54:57.627345  405191 cri.go:89] found id: ""
	I1206 10:54:57.627360  405191 logs.go:282] 0 containers: []
	W1206 10:54:57.627367  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:54:57.627398  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:54:57.627409  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:54:57.694026  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:54:57.694046  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:54:57.708621  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:54:57.708636  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:54:57.772743  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:54:57.764569   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:57.765305   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:57.766828   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:57.767291   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:57.768789   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:54:57.764569   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:57.765305   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:57.766828   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:57.767291   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:54:57.768789   16168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:54:57.772753  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:54:57.772764  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:54:57.841816  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:54:57.841836  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
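Every describe-nodes failure above is the same symptom: kubectl dials https://localhost:8441 and nothing is listening. A quick triage sketch from inside the node; the ss and curl checks are generic additions here (not commands from this log), while the journalctl line is the one the gatherer already runs:

    # Is anything bound to the port kubectl is dialing?
    sudo ss -ltnp | grep ':8441' || echo 'nothing listening on :8441'

    # The probe kubectl makes, without the client machinery:
    curl -ksS 'https://localhost:8441/api?timeout=32s' || true

    # The kubelet is responsible for (re)starting the control-plane static
    # pods; its journal is where the reason they never appear should show up.
    sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail' | tail -n 20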
	[... the same poll repeats unchanged at 10:55:00, 10:55:03, 10:55:06, 10:55:09, 10:55:12, and 10:55:15: crictl finds no kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, or kindnet container, and every "describe nodes" attempt fails with the same connection-refused errors against localhost:8441 ...]
	I1206 10:55:18.289173  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:18.300544  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:55:18.300610  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:55:18.327678  405191 cri.go:89] found id: ""
	I1206 10:55:18.327692  405191 logs.go:282] 0 containers: []
	W1206 10:55:18.327699  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:55:18.327704  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:55:18.327764  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:55:18.353999  405191 cri.go:89] found id: ""
	I1206 10:55:18.354014  405191 logs.go:282] 0 containers: []
	W1206 10:55:18.354021  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:55:18.354026  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:55:18.354084  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:55:18.382276  405191 cri.go:89] found id: ""
	I1206 10:55:18.382291  405191 logs.go:282] 0 containers: []
	W1206 10:55:18.382298  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:55:18.382304  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:55:18.382365  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:55:18.410827  405191 cri.go:89] found id: ""
	I1206 10:55:18.410841  405191 logs.go:282] 0 containers: []
	W1206 10:55:18.410847  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:55:18.410852  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:55:18.410911  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:55:18.436138  405191 cri.go:89] found id: ""
	I1206 10:55:18.436160  405191 logs.go:282] 0 containers: []
	W1206 10:55:18.436167  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:55:18.436172  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:55:18.436233  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:55:18.462254  405191 cri.go:89] found id: ""
	I1206 10:55:18.462269  405191 logs.go:282] 0 containers: []
	W1206 10:55:18.462276  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:55:18.462283  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:55:18.462346  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:55:18.492347  405191 cri.go:89] found id: ""
	I1206 10:55:18.492362  405191 logs.go:282] 0 containers: []
	W1206 10:55:18.492369  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:55:18.492377  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:55:18.492388  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:55:18.509956  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:55:18.509973  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:55:18.581031  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:55:18.572020   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:18.572812   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:18.573929   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:18.574912   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:18.575787   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:55:18.572020   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:18.572812   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:18.573929   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:18.574912   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:18.575787   16905 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:55:18.581041  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:55:18.581055  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:55:18.650942  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:55:18.650963  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:55:18.680668  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:55:18.680685  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:55:21.248379  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:21.258903  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:55:21.258982  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:55:21.286273  405191 cri.go:89] found id: ""
	I1206 10:55:21.286288  405191 logs.go:282] 0 containers: []
	W1206 10:55:21.286295  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:55:21.286300  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:55:21.286357  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:55:21.311824  405191 cri.go:89] found id: ""
	I1206 10:55:21.311841  405191 logs.go:282] 0 containers: []
	W1206 10:55:21.311851  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:55:21.311857  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:55:21.311923  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:55:21.338690  405191 cri.go:89] found id: ""
	I1206 10:55:21.338704  405191 logs.go:282] 0 containers: []
	W1206 10:55:21.338711  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:55:21.338716  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:55:21.338773  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:55:21.365841  405191 cri.go:89] found id: ""
	I1206 10:55:21.365855  405191 logs.go:282] 0 containers: []
	W1206 10:55:21.365862  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:55:21.365868  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:55:21.365926  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:55:21.396001  405191 cri.go:89] found id: ""
	I1206 10:55:21.396035  405191 logs.go:282] 0 containers: []
	W1206 10:55:21.396043  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:55:21.396049  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:55:21.396118  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:55:21.421823  405191 cri.go:89] found id: ""
	I1206 10:55:21.421837  405191 logs.go:282] 0 containers: []
	W1206 10:55:21.421856  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:55:21.421862  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:55:21.421934  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:55:21.449590  405191 cri.go:89] found id: ""
	I1206 10:55:21.449604  405191 logs.go:282] 0 containers: []
	W1206 10:55:21.449611  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:55:21.449619  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:55:21.449631  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:55:21.464618  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:55:21.464634  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:55:21.543901  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:55:21.526696   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:21.535561   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:21.536267   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:21.537985   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:21.538498   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:55:21.526696   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:21.535561   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:21.536267   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:21.537985   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:21.538498   17003 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:55:21.543913  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:55:21.543926  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:55:21.614646  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:55:21.614669  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:55:21.645809  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:55:21.645825  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
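	The gather-and-probe cycle above repeats every few seconds while minikube waits for the API server on port 8441 to come back. Each pass can be replayed by hand from a shell on the node (e.g. via `minikube ssh`); every command below is taken verbatim from the log lines above:

	    # probe for control-plane containers (all empty in this run)
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    # the same diagnostic bundle minikube gathers on each pass
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig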
	I1206 10:55:24.214037  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:24.226008  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:55:24.226071  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:55:24.252473  405191 cri.go:89] found id: ""
	I1206 10:55:24.252487  405191 logs.go:282] 0 containers: []
	W1206 10:55:24.252495  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:55:24.252500  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:55:24.252560  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:55:24.280242  405191 cri.go:89] found id: ""
	I1206 10:55:24.280256  405191 logs.go:282] 0 containers: []
	W1206 10:55:24.280263  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:55:24.280268  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:55:24.280328  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:55:24.307083  405191 cri.go:89] found id: ""
	I1206 10:55:24.307098  405191 logs.go:282] 0 containers: []
	W1206 10:55:24.307105  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:55:24.307111  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:55:24.307181  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:55:24.333215  405191 cri.go:89] found id: ""
	I1206 10:55:24.333230  405191 logs.go:282] 0 containers: []
	W1206 10:55:24.333239  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:55:24.333245  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:55:24.333312  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:55:24.364248  405191 cri.go:89] found id: ""
	I1206 10:55:24.364262  405191 logs.go:282] 0 containers: []
	W1206 10:55:24.364269  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:55:24.364275  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:55:24.364340  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:55:24.392539  405191 cri.go:89] found id: ""
	I1206 10:55:24.392554  405191 logs.go:282] 0 containers: []
	W1206 10:55:24.392561  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:55:24.392567  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:55:24.392631  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:55:24.419045  405191 cri.go:89] found id: ""
	I1206 10:55:24.419059  405191 logs.go:282] 0 containers: []
	W1206 10:55:24.419066  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:55:24.419074  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:55:24.419084  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:55:24.485101  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:55:24.485123  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:55:24.506235  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:55:24.506258  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:55:24.586208  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:55:24.577740   17118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:24.578227   17118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:24.579907   17118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:24.580253   17118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:24.581928   17118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:55:24.577740   17118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:24.578227   17118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:24.579907   17118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:24.580253   17118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:24.581928   17118 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:55:24.586218  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:55:24.586230  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:55:24.654219  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:55:24.654241  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:55:27.183198  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:27.194048  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:55:27.194116  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:55:27.223948  405191 cri.go:89] found id: ""
	I1206 10:55:27.223962  405191 logs.go:282] 0 containers: []
	W1206 10:55:27.223969  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:55:27.223974  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:55:27.224033  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:55:27.255792  405191 cri.go:89] found id: ""
	I1206 10:55:27.255807  405191 logs.go:282] 0 containers: []
	W1206 10:55:27.255814  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:55:27.255819  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:55:27.255882  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:55:27.285352  405191 cri.go:89] found id: ""
	I1206 10:55:27.285365  405191 logs.go:282] 0 containers: []
	W1206 10:55:27.285373  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:55:27.285380  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:55:27.285438  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:55:27.311572  405191 cri.go:89] found id: ""
	I1206 10:55:27.311599  405191 logs.go:282] 0 containers: []
	W1206 10:55:27.311606  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:55:27.311612  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:55:27.311684  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:55:27.337727  405191 cri.go:89] found id: ""
	I1206 10:55:27.337741  405191 logs.go:282] 0 containers: []
	W1206 10:55:27.337747  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:55:27.337753  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:55:27.337812  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:55:27.363513  405191 cri.go:89] found id: ""
	I1206 10:55:27.363527  405191 logs.go:282] 0 containers: []
	W1206 10:55:27.363534  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:55:27.363539  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:55:27.363611  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:55:27.390072  405191 cri.go:89] found id: ""
	I1206 10:55:27.390100  405191 logs.go:282] 0 containers: []
	W1206 10:55:27.390107  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:55:27.390115  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:55:27.390130  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:55:27.456548  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:55:27.456567  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:55:27.472626  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:55:27.472642  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:55:27.554055  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:55:27.545736   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:27.546254   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:27.548095   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:27.548446   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:27.550070   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:55:27.545736   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:27.546254   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:27.548095   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:27.548446   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:27.550070   17226 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:55:27.554065  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:55:27.554076  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:55:27.622961  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:55:27.622984  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:55:30.156731  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:30.168052  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 10:55:30.168115  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 10:55:30.195190  405191 cri.go:89] found id: ""
	I1206 10:55:30.195205  405191 logs.go:282] 0 containers: []
	W1206 10:55:30.195237  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 10:55:30.195243  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 10:55:30.195315  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 10:55:30.222581  405191 cri.go:89] found id: ""
	I1206 10:55:30.222615  405191 logs.go:282] 0 containers: []
	W1206 10:55:30.222622  405191 logs.go:284] No container was found matching "etcd"
	I1206 10:55:30.222628  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 10:55:30.222697  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 10:55:30.251144  405191 cri.go:89] found id: ""
	I1206 10:55:30.251162  405191 logs.go:282] 0 containers: []
	W1206 10:55:30.251173  405191 logs.go:284] No container was found matching "coredns"
	I1206 10:55:30.251178  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 10:55:30.251280  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 10:55:30.282704  405191 cri.go:89] found id: ""
	I1206 10:55:30.282731  405191 logs.go:282] 0 containers: []
	W1206 10:55:30.282739  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 10:55:30.282744  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 10:55:30.282818  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 10:55:30.308787  405191 cri.go:89] found id: ""
	I1206 10:55:30.308802  405191 logs.go:282] 0 containers: []
	W1206 10:55:30.308809  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 10:55:30.308814  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 10:55:30.308881  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 10:55:30.334479  405191 cri.go:89] found id: ""
	I1206 10:55:30.334494  405191 logs.go:282] 0 containers: []
	W1206 10:55:30.334501  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 10:55:30.334507  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 10:55:30.334582  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 10:55:30.361350  405191 cri.go:89] found id: ""
	I1206 10:55:30.361365  405191 logs.go:282] 0 containers: []
	W1206 10:55:30.361372  405191 logs.go:284] No container was found matching "kindnet"
	I1206 10:55:30.361380  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 10:55:30.361390  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 10:55:30.438089  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 10:55:30.438120  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 10:55:30.453200  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 10:55:30.453217  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 10:55:30.539250  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 10:55:30.524592   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:30.527641   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:30.528089   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:30.529752   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:30.530427   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 10:55:30.524592   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:30.527641   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:30.528089   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:30.529752   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 10:55:30.530427   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 10:55:30.539272  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 10:55:30.539285  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 10:55:30.610101  405191 logs.go:123] Gathering logs for container status ...
	I1206 10:55:30.610121  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 10:55:33.143484  405191 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 10:55:33.153906  405191 kubeadm.go:602] duration metric: took 4m2.63956924s to restartPrimaryControlPlane
	W1206 10:55:33.153970  405191 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
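	With no control-plane container ever appearing, the restart path is abandoned after 4m2.6s and minikube falls back to wiping the node for a fresh init. The reset it issues next, and the kubelet liveness check that follows, can be replayed the same way; a minimal sketch, assuming the binary path and CRI socket shown in this run:

	    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
	      kubeadm reset --cri-socket /var/run/crio/crio.sock --force
	    # simplified form of the liveness check minikube runs on the next lines
	    sudo systemctl is-active kubelet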
	I1206 10:55:33.154044  405191 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1206 10:55:33.564051  405191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:55:33.577264  405191 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 10:55:33.585285  405191 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 10:55:33.585343  405191 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:55:33.593207  405191 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:55:33.593217  405191 kubeadm.go:158] found existing configuration files:
	
	I1206 10:55:33.593284  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:55:33.601281  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:55:33.601338  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:55:33.609078  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:55:33.617336  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:55:33.617395  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:55:33.625100  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:55:33.633096  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:55:33.633153  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:55:33.640767  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:55:33.648692  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:55:33.648783  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
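	The cleanup above is mechanical: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected endpoint, otherwise removed. Since kubeadm reset deleted all four files, every grep exits with status 2 and the rm is a no-op. A compact equivalent of the per-file checks, assuming the endpoint used in this run:

	    for f in admin kubelet controller-manager scheduler; do
	      sudo grep -q 'https://control-plane.minikube.internal:8441' \
	        "/etc/kubernetes/${f}.conf" || sudo rm -f "/etc/kubernetes/${f}.conf"
	    done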
	I1206 10:55:33.656355  405191 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 10:55:33.695114  405191 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 10:55:33.695495  405191 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:55:33.776558  405191 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:55:33.776622  405191 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:55:33.776656  405191 kubeadm.go:319] OS: Linux
	I1206 10:55:33.776700  405191 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:55:33.776747  405191 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:55:33.776793  405191 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:55:33.776839  405191 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:55:33.776886  405191 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:55:33.776933  405191 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:55:33.776976  405191 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:55:33.777023  405191 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:55:33.777067  405191 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:55:33.839562  405191 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:55:33.839700  405191 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:55:33.839825  405191 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:55:33.847872  405191 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:55:33.851528  405191 out.go:252]   - Generating certificates and keys ...
	I1206 10:55:33.851642  405191 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:55:33.851732  405191 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:55:33.851823  405191 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 10:55:33.851888  405191 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 10:55:33.851963  405191 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 10:55:33.852020  405191 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 10:55:33.852092  405191 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 10:55:33.852157  405191 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 10:55:33.852236  405191 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 10:55:33.852314  405191 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 10:55:33.852354  405191 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 10:55:33.852412  405191 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:55:34.131310  405191 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:55:34.288855  405191 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:55:34.553487  405191 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:55:35.148231  405191 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:55:35.211116  405191 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:55:35.211864  405191 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:55:35.214714  405191 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:55:35.218231  405191 out.go:252]   - Booting up control plane ...
	I1206 10:55:35.218330  405191 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:55:35.218406  405191 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:55:35.218472  405191 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:55:35.235870  405191 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:55:35.235976  405191 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:55:35.244902  405191 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:55:35.245320  405191 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:55:35.245379  405191 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:55:35.375634  405191 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:55:35.375747  405191 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 10:59:35.374512  405191 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000270227s
	I1206 10:59:35.374544  405191 kubeadm.go:319] 
	I1206 10:59:35.374605  405191 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 10:59:35.374643  405191 kubeadm.go:319] 	- The kubelet is not running
	I1206 10:59:35.374758  405191 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 10:59:35.374763  405191 kubeadm.go:319] 
	I1206 10:59:35.374876  405191 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 10:59:35.374910  405191 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 10:59:35.374942  405191 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 10:59:35.374945  405191 kubeadm.go:319] 
	I1206 10:59:35.380563  405191 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 10:59:35.380998  405191 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 10:59:35.381115  405191 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 10:59:35.381348  405191 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 10:59:35.381353  405191 kubeadm.go:319] 
	I1206 10:59:35.381420  405191 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
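	The whole init failure reduces to one fact: the kubelet never answered its health endpoint within kubeadm's 4m0s window. Both the probe kubeadm uses and the triage commands it suggests can be run directly on the node; all three are quoted verbatim in the output above:

	    # kubeadm's readiness probe (timed out in this run)
	    curl -sSL http://127.0.0.1:10248/healthz
	    # suggested triage on a systemd host
	    systemctl status kubelet
	    journalctl -xeu kubelet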
	W1206 10:59:35.381523  405191 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000270227s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
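	Of the three preflight warnings in the stderr above, the cgroups one is the most plausible culprit on this host: the kernel is 5.15 on cgroups v1, and the warning states that kubelet v1.35 or newer requires explicitly setting the configuration option 'FailCgroupV1' to 'false' to keep running on v1. A hypothetical opt-in sketch, assuming the upstream KubeletConfiguration spells the field failCgroupV1 and using the config path kubeadm wrote above:

	    # hypothetical: append the cgroups-v1 opt-in to the kubelet config and restart
	    printf 'failCgroupV1: false\n' | sudo tee -a /var/lib/kubelet/config.yaml
	    sudo systemctl restart kubelet

	Whether this is the actual failure mode cannot be confirmed from this excerpt; the 'journalctl -xeu kubelet' output would show the kubelet's own exit reason.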
	
	I1206 10:59:35.381613  405191 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1206 10:59:35.796714  405191 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 10:59:35.809334  405191 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 10:59:35.809388  405191 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 10:59:35.817444  405191 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 10:59:35.817452  405191 kubeadm.go:158] found existing configuration files:
	
	I1206 10:59:35.817502  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1206 10:59:35.825442  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 10:59:35.825501  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 10:59:35.833082  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1206 10:59:35.842093  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 10:59:35.842159  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 10:59:35.851759  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1206 10:59:35.860099  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 10:59:35.860161  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 10:59:35.867900  405191 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1206 10:59:35.876130  405191 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 10:59:35.876188  405191 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 10:59:35.884013  405191 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 10:59:35.926383  405191 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 10:59:35.926438  405191 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 10:59:36.016832  405191 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 10:59:36.016925  405191 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 10:59:36.016974  405191 kubeadm.go:319] OS: Linux
	I1206 10:59:36.017019  405191 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 10:59:36.017071  405191 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 10:59:36.017119  405191 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 10:59:36.017173  405191 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 10:59:36.017220  405191 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 10:59:36.017277  405191 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 10:59:36.017339  405191 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 10:59:36.017401  405191 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 10:59:36.017447  405191 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 10:59:36.080832  405191 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 10:59:36.080951  405191 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 10:59:36.081048  405191 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 10:59:36.091906  405191 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 10:59:36.097223  405191 out.go:252]   - Generating certificates and keys ...
	I1206 10:59:36.097345  405191 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 10:59:36.097426  405191 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 10:59:36.097511  405191 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 10:59:36.097596  405191 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 10:59:36.097675  405191 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 10:59:36.097750  405191 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 10:59:36.097815  405191 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 10:59:36.097876  405191 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 10:59:36.097954  405191 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 10:59:36.098026  405191 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 10:59:36.098063  405191 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 10:59:36.098122  405191 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 10:59:36.705762  405191 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 10:59:36.885173  405191 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 10:59:37.204953  405191 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 10:59:37.715956  405191 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 10:59:37.848965  405191 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 10:59:37.849735  405191 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 10:59:37.853600  405191 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 10:59:37.856590  405191 out.go:252]   - Booting up control plane ...
	I1206 10:59:37.856698  405191 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 10:59:37.856819  405191 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 10:59:37.858671  405191 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 10:59:37.873039  405191 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 10:59:37.873143  405191 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 10:59:37.880838  405191 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 10:59:37.881129  405191 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 10:59:37.881370  405191 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 10:59:38.015956  405191 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 10:59:38.016070  405191 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 11:03:38.011572  405191 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000448393s
	I1206 11:03:38.011605  405191 kubeadm.go:319] 
	I1206 11:03:38.011721  405191 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 11:03:38.011777  405191 kubeadm.go:319] 	- The kubelet is not running
	I1206 11:03:38.012051  405191 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 11:03:38.012060  405191 kubeadm.go:319] 
	I1206 11:03:38.012421  405191 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 11:03:38.012573  405191 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 11:03:38.012628  405191 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 11:03:38.012633  405191 kubeadm.go:319] 
	I1206 11:03:38.018189  405191 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 11:03:38.018608  405191 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 11:03:38.018716  405191 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 11:03:38.018960  405191 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 11:03:38.018965  405191 kubeadm.go:319] 
	I1206 11:03:38.019033  405191 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
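	The second init attempt fails identically after another 4m0s wait, and the same three preflight warnings recur. The remaining actionable one is the disabled kubelet unit; per the warning text itself:

	    sudo systemctl enable kubelet.service
	    sudo systemctl status kubelet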
	I1206 11:03:38.019089  405191 kubeadm.go:403] duration metric: took 12m7.551905569s to StartCluster
	I1206 11:03:38.019121  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:03:38.019191  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:03:38.048894  405191 cri.go:89] found id: ""
	I1206 11:03:38.048909  405191 logs.go:282] 0 containers: []
	W1206 11:03:38.048917  405191 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:03:38.048922  405191 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:03:38.049009  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:03:38.077125  405191 cri.go:89] found id: ""
	I1206 11:03:38.077141  405191 logs.go:282] 0 containers: []
	W1206 11:03:38.077149  405191 logs.go:284] No container was found matching "etcd"
	I1206 11:03:38.077154  405191 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:03:38.077229  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:03:38.104859  405191 cri.go:89] found id: ""
	I1206 11:03:38.104873  405191 logs.go:282] 0 containers: []
	W1206 11:03:38.104881  405191 logs.go:284] No container was found matching "coredns"
	I1206 11:03:38.104886  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:03:38.104946  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:03:38.131268  405191 cri.go:89] found id: ""
	I1206 11:03:38.131282  405191 logs.go:282] 0 containers: []
	W1206 11:03:38.131289  405191 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:03:38.131295  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:03:38.131356  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:03:38.161469  405191 cri.go:89] found id: ""
	I1206 11:03:38.161483  405191 logs.go:282] 0 containers: []
	W1206 11:03:38.161490  405191 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:03:38.161495  405191 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:03:38.161555  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:03:38.191440  405191 cri.go:89] found id: ""
	I1206 11:03:38.191454  405191 logs.go:282] 0 containers: []
	W1206 11:03:38.191461  405191 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:03:38.191467  405191 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:03:38.191536  405191 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:03:38.219921  405191 cri.go:89] found id: ""
	I1206 11:03:38.219935  405191 logs.go:282] 0 containers: []
	W1206 11:03:38.219943  405191 logs.go:284] No container was found matching "kindnet"
	I1206 11:03:38.219951  405191 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:03:38.219962  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:03:38.285137  405191 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:03:38.277076   21116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:38.277519   21116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:38.279007   21116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:38.279647   21116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:38.281164   21116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1206 11:03:38.277076   21116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:38.277519   21116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:38.279007   21116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:38.279647   21116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:38.281164   21116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:03:38.285157  405191 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:03:38.285169  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:03:38.355235  405191 logs.go:123] Gathering logs for container status ...
	I1206 11:03:38.355259  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:03:38.391661  405191 logs.go:123] Gathering logs for kubelet ...
	I1206 11:03:38.391679  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:03:38.462714  405191 logs.go:123] Gathering logs for dmesg ...
	I1206 11:03:38.462733  405191 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1206 11:03:38.480853  405191 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000448393s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1206 11:03:38.480894  405191 out.go:285] * 
	W1206 11:03:38.480951  405191 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000448393s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 11:03:38.480964  405191 out.go:285] * 
	W1206 11:03:38.483093  405191 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 11:03:38.488282  405191 out.go:203] 
	W1206 11:03:38.491978  405191 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000448393s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 11:03:38.492089  405191 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 11:03:38.492161  405191 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1206 11:03:38.495164  405191 out.go:203] 
	
	
	==> CRI-O <==
	Dec 06 10:51:28 functional-196950 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 06 10:55:33 functional-196950 crio[9931]: time="2025-12-06T10:55:33.843321747Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=5f168690-0479-4b67-8846-d623c54570c0 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:55:33 functional-196950 crio[9931]: time="2025-12-06T10:55:33.844283422Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=5ac9537d-0142-4bb1-b0d9-019b296bd707 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:55:33 functional-196950 crio[9931]: time="2025-12-06T10:55:33.844830562Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=29c4c588-f2de-4c84-8064-807353d6179d name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:55:33 functional-196950 crio[9931]: time="2025-12-06T10:55:33.845343116Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=1da192f7-cfcc-42b9-837a-9f285f929dcd name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:55:33 functional-196950 crio[9931]: time="2025-12-06T10:55:33.845798415Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=320d875b-b045-44db-aacf-14add6cc927b name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:55:33 functional-196950 crio[9931]: time="2025-12-06T10:55:33.846245664Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=dd8834be-d268-43d9-854b-d34b54047169 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:55:33 functional-196950 crio[9931]: time="2025-12-06T10:55:33.846769058Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=25869f1e-412e-46d2-b706-063f94749122 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.084499828Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=f0fd0946-b323-435f-946c-e412850eb9c0 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.085495997Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=34b6bb47-44a4-4780-9567-04c497973fa7 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.08608111Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=e26253f3-5094-4fbe-b6d1-306f2e31fa9a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.086661177Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=f8a4ffa3-a2d3-4c05-ba97-fd167ad1ff4e name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.087187984Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=43819321-fbd2-4155-9f0b-c716c27fc9ce name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.087957288Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=aeb2085e-c4e7-4d42-9049-5042f515cbdb name=/runtime.v1.ImageService/ImageStatus
	Dec 06 10:59:36 functional-196950 crio[9931]: time="2025-12-06T10:59:36.088481658Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=0f01a640-f6a6-41dd-afc3-f5cae208f89a name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.079725515Z" level=info msg="Checking image status: kicbase/echo-server:functional-196950" id=f87ee22d-4408-46c2-8930-0ab8ba7ffa52 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.079908966Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.079954112Z" level=info msg="Image kicbase/echo-server:functional-196950 not found" id=f87ee22d-4408-46c2-8930-0ab8ba7ffa52 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.080016438Z" level=info msg="Neither image nor artifact kicbase/echo-server:functional-196950 found" id=f87ee22d-4408-46c2-8930-0ab8ba7ffa52 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.114067912Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-196950" id=da9dda5e-17fc-43e5-a93c-9057adb4fa98 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.114216122Z" level=info msg="Image docker.io/kicbase/echo-server:functional-196950 not found" id=da9dda5e-17fc-43e5-a93c-9057adb4fa98 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.114253587Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:functional-196950 found" id=da9dda5e-17fc-43e5-a93c-9057adb4fa98 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.143186327Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-196950" id=629f3d91-fd66-4652-88b6-cbe010464984 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.143320859Z" level=info msg="Image localhost/kicbase/echo-server:functional-196950 not found" id=629f3d91-fd66-4652-88b6-cbe010464984 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:03:48 functional-196950 crio[9931]: time="2025-12-06T11:03:48.143363608Z" level=info msg="Neither image nor artifact localhost/kicbase/echo-server:functional-196950 found" id=629f3d91-fd66-4652-88b6-cbe010464984 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1206 11:03:49.429922   21907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:49.430691   21907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:49.432363   21907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:49.432944   21907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1206 11:03:49.434540   21907 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 6 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014752] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503231] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.065820] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.901896] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.944603] kauditd_printk_skb: 39 callbacks suppressed
	[Dec 6 09:04] hrtimer: interrupt took 32230230 ns
	[Dec 6 10:25] kauditd_printk_skb: 8 callbacks suppressed
	[Dec 6 10:26] overlayfs: idmapped layers are currently not supported
	[  +0.066821] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec 6 10:32] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:33] overlayfs: idmapped layers are currently not supported
	[Dec 6 10:51] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 11:03:49 up  2:46,  0 user,  load average: 0.65, 0.30, 0.52
	Linux functional-196950 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 11:03:46 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:03:47 functional-196950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1814.
	Dec 06 11:03:47 functional-196950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:03:47 functional-196950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:03:47 functional-196950 kubelet[21715]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:03:47 functional-196950 kubelet[21715]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:03:47 functional-196950 kubelet[21715]: E1206 11:03:47.584956   21715 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:03:47 functional-196950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:03:47 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:03:48 functional-196950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1815.
	Dec 06 11:03:48 functional-196950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:03:48 functional-196950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:03:48 functional-196950 kubelet[21789]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:03:48 functional-196950 kubelet[21789]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:03:48 functional-196950 kubelet[21789]: E1206 11:03:48.322497   21789 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:03:48 functional-196950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:03:48 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:03:48 functional-196950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1816.
	Dec 06 11:03:48 functional-196950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:03:48 functional-196950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:03:49 functional-196950 kubelet[21841]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:03:49 functional-196950 kubelet[21841]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:03:49 functional-196950 kubelet[21841]: E1206 11:03:49.076141   21841 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:03:49 functional-196950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:03:49 functional-196950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
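The kubelet journal at the end of that dump is the most direct evidence of the root cause: every restart (counter at 1816 by now) dies with "kubelet is configured to not run on a host using cgroup v1". A quick host-side check distinguishes the two hierarchies; this is a stock coreutils invocation shown for illustration, not something the harness runs:

	# cgroup2fs -> unified cgroup v2 hierarchy
	# tmpfs     -> legacy cgroup v1 layout, which kubelet v1.35 rejects here
	stat -fc %T /sys/fs/cgroup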
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-196950 -n functional-196950
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-196950 -n functional-196950: exit status 2 (443.662799ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "functional-196950" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (3.13s)
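The repeated [WARNING SystemVerification] in the kubeadm output above names the knob directly: on a cgroup v1 host, kubelet v1.35+ refuses to start unless the KubeletConfiguration option FailCgroupV1 is set to false (with the validation explicitly skipped, which the failing invocation already does by listing SystemVerification under --ignore-preflight-errors). Since this run already applies a strategic-merge patch to the "kubeletconfiguration" target (see the [patches] line), one plausible manual workaround is another patch through the same mechanism. A minimal sketch, assuming kubeadm's --patches directory convention; the directory path is illustrative and the field spelling is taken from the warning text, so treat it as unverified against this exact kubeadm build:

	# Hypothetical patch dir; file names follow kubeadm's
	# <target>+<patchtype>.<extension> convention.
	mkdir -p /tmp/kubelet-patches
	cat > /tmp/kubelet-patches/kubeletconfiguration+strategic.yaml <<'EOF'
	# Strategic-merge patch onto the generated KubeletConfiguration:
	# re-enable deprecated cgroup v1 support, per the warning above.
	failCgroupV1: false
	EOF
	sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml --patches /tmp/kubelet-patches

The longer-term fix is booting the host on the unified cgroup v2 hierarchy, which is what the deprecation notice ultimately asks for.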

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-196950 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-196950 create deployment hello-node --image kicbase/echo-server: exit status 1 (76.60056ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-196950 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.36s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-196950 service list: exit status 103 (355.349613ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-196950 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-196950"

                                                
                                                
-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-arm64 -p functional-196950 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-196950 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-196950\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.36s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.36s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-196950 service list -o json: exit status 103 (360.360736ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-196950 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-196950"

                                                
                                                
-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-arm64 -p functional-196950 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.36s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-196950 service --namespace=default --https --url hello-node: exit status 103 (368.893393ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-196950 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-196950"

                                                
                                                
-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-196950 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.32s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-196950 service hello-node --url --format={{.IP}}: exit status 103 (315.144048ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-196950 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-196950"

                                                
                                                
-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-196950 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-196950 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-196950\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.32s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-196950 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-196950 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1206 11:03:54.626596  420180 out.go:360] Setting OutFile to fd 1 ...
I1206 11:03:54.626852  420180 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 11:03:54.626882  420180 out.go:374] Setting ErrFile to fd 2...
I1206 11:03:54.626903  420180 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 11:03:54.627225  420180 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
I1206 11:03:54.627597  420180 mustload.go:66] Loading cluster: functional-196950
I1206 11:03:54.628141  420180 config.go:182] Loaded profile config "functional-196950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 11:03:54.628684  420180 cli_runner.go:164] Run: docker container inspect functional-196950 --format={{.State.Status}}
I1206 11:03:54.653105  420180 host.go:66] Checking if "functional-196950" exists ...
I1206 11:03:54.653490  420180 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1206 11:03:54.772768  420180 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:03:54.740802768 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1206 11:03:54.772888  420180 api_server.go:166] Checking apiserver status ...
I1206 11:03:54.772949  420180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1206 11:03:54.772991  420180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
I1206 11:03:54.808273  420180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
W1206 11:03:54.927957  420180 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1206 11:03:54.931247  420180 out.go:179] * The control-plane node functional-196950 apiserver is not running: (state=Stopped)
I1206 11:03:54.934448  420180 out.go:179]   To start a cluster, run: "minikube start -p functional-196950"

                                                
                                                
stdout: * The control-plane node functional-196950 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-196950"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-196950 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 420181: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-196950 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-196950 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-196950 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-196950 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-196950 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)
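Every service and tunnel subtest in this group fails identically with exit code 103, and the stderr above shows where that verdict comes from: api_server.go probes for an apiserver process inside the node with sudo pgrep -xnf kube-apiserver.*minikube.* and finds nothing. The probe is easy to reproduce by hand; this sketch assumes minikube ssh forwards the trailing command to the node:

	# Re-run the harness's apiserver liveness probe manually; the pattern
	# is copied from the api_server.go log line above. A non-zero exit
	# matches the "state=Stopped" verdict printed by every subtest below.
	minikube -p functional-196950 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'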

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.44s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-196950 service hello-node --url: exit status 103 (444.042543ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-196950 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-196950"

                                                
                                                
-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-196950 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-196950 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-196950"
functional_test.go:1579: failed to parse "* The control-plane node functional-196950 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-196950\"": parse "* The control-plane node functional-196950 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-196950\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.44s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.1s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-196950 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-196950 apply -f testdata/testsvc.yaml: exit status 1 (100.5772ms)

                                                
                                                
** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-196950 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.10s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (93s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://10.103.244.119": Temporary Error: Get "http://10.103.244.119": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-196950 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-196950 get svc nginx-svc: exit status 1 (63.904056ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-196950 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (93.00s)
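For context on what AccessDirect exercises: with a tunnel running, the service's ClusterIP (10.103.244.119 in this run) becomes routable from the host and should serve the nginx welcome page. A manual equivalent, meaningful only once the apiserver is healthy again; the IP, service name, and expected body are taken from the log above:

	# Terminal 1: keep the tunnel up (it routes the service CIDR to the host)
	minikube -p functional-196950 tunnel
	# Terminal 2: the assertion the test makes, by hand
	kubectl --context functional-196950 get svc nginx-svc
	curl -s --max-time 5 http://10.103.244.119 | grep -i 'Welcome to nginx!'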

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.36s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-196950 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo46525363/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765019135320467161" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo46525363/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765019135320467161" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo46525363/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765019135320467161" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo46525363/001/test-1765019135320467161
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-196950 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (333.874479ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1206 11:05:35.654636  364855 retry.go:31] will retry after 440.226938ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  6 11:05 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  6 11:05 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  6 11:05 test-1765019135320467161
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 ssh cat /mount-9p/test-1765019135320467161
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-196950 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-196950 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (72.041148ms)

** stderr ** 
	error: error when deleting "testdata/busybox-mount-test.yaml": Delete "https://192.168.49.2:8441/api/v1/namespaces/default/pods/busybox-mount": dial tcp 192.168.49.2:8441: connect: connection refused

** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-196950 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-196950 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (282.528627ms)

-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=36677)
	total 2
	-rw-r--r-- 1 docker docker 24 Dec  6 11:05 created-by-test
	-rw-r--r-- 1 docker docker 24 Dec  6 11:05 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Dec  6 11:05 test-1765019135320467161
	cat: /mount-9p/pod-dates: No such file or directory

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-arm64 -p functional-196950 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-196950 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo46525363/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-196950 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo46525363/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo46525363/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:36677
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo46525363/001 to /mount-9p

* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-196950 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo46525363/001:/mount-9p --alsologtostderr -v=1] stderr:
I1206 11:05:35.386569  422526 out.go:360] Setting OutFile to fd 1 ...
I1206 11:05:35.386752  422526 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 11:05:35.386760  422526 out.go:374] Setting ErrFile to fd 2...
I1206 11:05:35.386765  422526 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 11:05:35.387030  422526 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
I1206 11:05:35.387282  422526 mustload.go:66] Loading cluster: functional-196950
I1206 11:05:35.387701  422526 config.go:182] Loaded profile config "functional-196950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 11:05:35.388203  422526 cli_runner.go:164] Run: docker container inspect functional-196950 --format={{.State.Status}}
I1206 11:05:35.406843  422526 host.go:66] Checking if "functional-196950" exists ...
I1206 11:05:35.407197  422526 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1206 11:05:35.487010  422526 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:05:35.47661996 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1206 11:05:35.487166  422526 cli_runner.go:164] Run: docker network inspect functional-196950 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1206 11:05:35.520779  422526 out.go:179] * Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo46525363/001 into VM as /mount-9p ...
I1206 11:05:35.523865  422526 out.go:179]   - Mount type:   9p
I1206 11:05:35.526723  422526 out.go:179]   - User ID:      docker
I1206 11:05:35.529781  422526 out.go:179]   - Group ID:     docker
I1206 11:05:35.532557  422526 out.go:179]   - Version:      9p2000.L
I1206 11:05:35.535270  422526 out.go:179]   - Message Size: 262144
I1206 11:05:35.538055  422526 out.go:179]   - Options:      map[]
I1206 11:05:35.540810  422526 out.go:179]   - Bind Address: 192.168.49.1:36677
I1206 11:05:35.543650  422526 out.go:179] * Userspace file server: 
I1206 11:05:35.544171  422526 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1206 11:05:35.544298  422526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
I1206 11:05:35.569850  422526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
I1206 11:05:35.674315  422526 mount.go:180] unmount for /mount-9p ran successfully
I1206 11:05:35.674352  422526 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1206 11:05:35.683153  422526 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=36677,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1206 11:05:35.694549  422526 main.go:127] stdlog: ufs.go:141 connected
I1206 11:05:35.694724  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Tversion tag 65535 msize 262144 version '9P2000.L'
I1206 11:05:35.694775  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rversion tag 65535 msize 262144 version '9P2000'
I1206 11:05:35.695008  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1206 11:05:35.695072  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rattach tag 0 aqid (4431a f356ad55 'd')
I1206 11:05:35.697128  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Tstat tag 0 fid 0
I1206 11:05:35.697217  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (4431a f356ad55 'd') m d775 at 0 mt 1765019135 l 4096 t 0 d 0 ext )
I1206 11:05:35.698967  422526 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/.mount-process: {Name:mk4738f5b3dbf53bce671ea5a02250aae6a9dcf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 11:05:35.699159  422526 mount.go:105] mount successful: ""
I1206 11:05:35.702590  422526 out.go:179] * Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo46525363/001 to /mount-9p
I1206 11:05:35.705562  422526 out.go:203] 
I1206 11:05:35.708542  422526 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1206 11:05:36.655913  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Tstat tag 0 fid 0
I1206 11:05:36.655989  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (4431a f356ad55 'd') m d775 at 0 mt 1765019135 l 4096 t 0 d 0 ext )
I1206 11:05:36.656355  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Twalk tag 0 fid 0 newfid 1 
I1206 11:05:36.656394  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rwalk tag 0 
I1206 11:05:36.656522  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Topen tag 0 fid 1 mode 0
I1206 11:05:36.656574  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Ropen tag 0 qid (4431a f356ad55 'd') iounit 0
I1206 11:05:36.656732  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Tstat tag 0 fid 0
I1206 11:05:36.656790  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (4431a f356ad55 'd') m d775 at 0 mt 1765019135 l 4096 t 0 d 0 ext )
I1206 11:05:36.656951  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Tread tag 0 fid 1 offset 0 count 262120
I1206 11:05:36.657070  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rread tag 0 count 258
I1206 11:05:36.657211  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Tread tag 0 fid 1 offset 258 count 261862
I1206 11:05:36.657251  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rread tag 0 count 0
I1206 11:05:36.657388  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Tread tag 0 fid 1 offset 258 count 262120
I1206 11:05:36.657413  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rread tag 0 count 0
I1206 11:05:36.657560  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1206 11:05:36.657596  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rwalk tag 0 (4431d f356ad55 '') 
I1206 11:05:36.657719  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Tstat tag 0 fid 2
I1206 11:05:36.657760  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (4431d f356ad55 '') m 644 at 0 mt 1765019135 l 24 t 0 d 0 ext )
I1206 11:05:36.657891  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Tstat tag 0 fid 2
I1206 11:05:36.657941  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (4431d f356ad55 '') m 644 at 0 mt 1765019135 l 24 t 0 d 0 ext )
I1206 11:05:36.658081  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Tclunk tag 0 fid 2
I1206 11:05:36.658124  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rclunk tag 0
I1206 11:05:36.658247  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Twalk tag 0 fid 0 newfid 2 0:'test-1765019135320467161' 
I1206 11:05:36.658288  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rwalk tag 0 (4431f f356ad55 '') 
I1206 11:05:36.658428  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Tstat tag 0 fid 2
I1206 11:05:36.658467  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rstat tag 0 st ('test-1765019135320467161' 'jenkins' 'jenkins' '' q (4431f f356ad55 '') m 644 at 0 mt 1765019135 l 24 t 0 d 0 ext )
I1206 11:05:36.658589  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Tstat tag 0 fid 2
I1206 11:05:36.658627  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rstat tag 0 st ('test-1765019135320467161' 'jenkins' 'jenkins' '' q (4431f f356ad55 '') m 644 at 0 mt 1765019135 l 24 t 0 d 0 ext )
I1206 11:05:36.658756  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Tclunk tag 0 fid 2
I1206 11:05:36.658787  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rclunk tag 0
I1206 11:05:36.658918  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1206 11:05:36.658955  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rwalk tag 0 (4431e f356ad55 '') 
I1206 11:05:36.659082  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Tstat tag 0 fid 2
I1206 11:05:36.659122  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (4431e f356ad55 '') m 644 at 0 mt 1765019135 l 24 t 0 d 0 ext )
I1206 11:05:36.659252  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Tstat tag 0 fid 2
I1206 11:05:36.659292  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (4431e f356ad55 '') m 644 at 0 mt 1765019135 l 24 t 0 d 0 ext )
I1206 11:05:36.659442  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Tclunk tag 0 fid 2
I1206 11:05:36.659466  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rclunk tag 0
I1206 11:05:36.659595  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Tread tag 0 fid 1 offset 258 count 262120
I1206 11:05:36.659623  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rread tag 0 count 0
I1206 11:05:36.659756  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Tclunk tag 0 fid 1
I1206 11:05:36.659785  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rclunk tag 0
I1206 11:05:36.927611  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Twalk tag 0 fid 0 newfid 1 0:'test-1765019135320467161' 
I1206 11:05:36.927682  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rwalk tag 0 (4431f f356ad55 '') 
I1206 11:05:36.927850  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Tstat tag 0 fid 1
I1206 11:05:36.927895  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rstat tag 0 st ('test-1765019135320467161' 'jenkins' 'jenkins' '' q (4431f f356ad55 '') m 644 at 0 mt 1765019135 l 24 t 0 d 0 ext )
I1206 11:05:36.928054  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Twalk tag 0 fid 1 newfid 2 
I1206 11:05:36.928086  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rwalk tag 0 
I1206 11:05:36.928222  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Topen tag 0 fid 2 mode 0
I1206 11:05:36.928303  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Ropen tag 0 qid (4431f f356ad55 '') iounit 0
I1206 11:05:36.928446  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Tstat tag 0 fid 1
I1206 11:05:36.928487  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rstat tag 0 st ('test-1765019135320467161' 'jenkins' 'jenkins' '' q (4431f f356ad55 '') m 644 at 0 mt 1765019135 l 24 t 0 d 0 ext )
I1206 11:05:36.928630  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Tread tag 0 fid 2 offset 0 count 262120
I1206 11:05:36.928679  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rread tag 0 count 24
I1206 11:05:36.928803  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Tread tag 0 fid 2 offset 24 count 262120
I1206 11:05:36.928841  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rread tag 0 count 0
I1206 11:05:36.928983  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Tread tag 0 fid 2 offset 24 count 262120
I1206 11:05:36.929020  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rread tag 0 count 0
I1206 11:05:36.929179  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Tclunk tag 0 fid 2
I1206 11:05:36.929229  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rclunk tag 0
I1206 11:05:36.929449  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Tclunk tag 0 fid 1
I1206 11:05:36.929477  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rclunk tag 0
I1206 11:05:37.286287  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Tstat tag 0 fid 0
I1206 11:05:37.286361  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (4431a f356ad55 'd') m d775 at 0 mt 1765019135 l 4096 t 0 d 0 ext )
I1206 11:05:37.286738  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Twalk tag 0 fid 0 newfid 1 
I1206 11:05:37.286786  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rwalk tag 0 
I1206 11:05:37.286948  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Topen tag 0 fid 1 mode 0
I1206 11:05:37.287010  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Ropen tag 0 qid (4431a f356ad55 'd') iounit 0
I1206 11:05:37.287129  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Tstat tag 0 fid 0
I1206 11:05:37.287176  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (4431a f356ad55 'd') m d775 at 0 mt 1765019135 l 4096 t 0 d 0 ext )
I1206 11:05:37.287350  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Tread tag 0 fid 1 offset 0 count 262120
I1206 11:05:37.287480  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rread tag 0 count 258
I1206 11:05:37.287629  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Tread tag 0 fid 1 offset 258 count 261862
I1206 11:05:37.287663  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rread tag 0 count 0
I1206 11:05:37.287800  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Tread tag 0 fid 1 offset 258 count 262120
I1206 11:05:37.287828  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rread tag 0 count 0
I1206 11:05:37.287964  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1206 11:05:37.288000  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rwalk tag 0 (4431d f356ad55 '') 
I1206 11:05:37.288132  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Tstat tag 0 fid 2
I1206 11:05:37.288177  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (4431d f356ad55 '') m 644 at 0 mt 1765019135 l 24 t 0 d 0 ext )
I1206 11:05:37.288312  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Tstat tag 0 fid 2
I1206 11:05:37.288359  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (4431d f356ad55 '') m 644 at 0 mt 1765019135 l 24 t 0 d 0 ext )
I1206 11:05:37.288495  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Tclunk tag 0 fid 2
I1206 11:05:37.288519  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rclunk tag 0
I1206 11:05:37.288652  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Twalk tag 0 fid 0 newfid 2 0:'test-1765019135320467161' 
I1206 11:05:37.288689  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rwalk tag 0 (4431f f356ad55 '') 
I1206 11:05:37.288823  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Tstat tag 0 fid 2
I1206 11:05:37.288859  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rstat tag 0 st ('test-1765019135320467161' 'jenkins' 'jenkins' '' q (4431f f356ad55 '') m 644 at 0 mt 1765019135 l 24 t 0 d 0 ext )
I1206 11:05:37.288996  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Tstat tag 0 fid 2
I1206 11:05:37.289031  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rstat tag 0 st ('test-1765019135320467161' 'jenkins' 'jenkins' '' q (4431f f356ad55 '') m 644 at 0 mt 1765019135 l 24 t 0 d 0 ext )
I1206 11:05:37.289163  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Tclunk tag 0 fid 2
I1206 11:05:37.289186  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rclunk tag 0
I1206 11:05:37.289323  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1206 11:05:37.289359  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rwalk tag 0 (4431e f356ad55 '') 
I1206 11:05:37.289493  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Tstat tag 0 fid 2
I1206 11:05:37.289528  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (4431e f356ad55 '') m 644 at 0 mt 1765019135 l 24 t 0 d 0 ext )
I1206 11:05:37.289650  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Tstat tag 0 fid 2
I1206 11:05:37.289687  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (4431e f356ad55 '') m 644 at 0 mt 1765019135 l 24 t 0 d 0 ext )
I1206 11:05:37.289824  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Tclunk tag 0 fid 2
I1206 11:05:37.289847  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rclunk tag 0
I1206 11:05:37.289972  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Tread tag 0 fid 1 offset 258 count 262120
I1206 11:05:37.289999  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rread tag 0 count 0
I1206 11:05:37.290159  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Tclunk tag 0 fid 1
I1206 11:05:37.290191  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rclunk tag 0
I1206 11:05:37.291550  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1206 11:05:37.291621  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rerror tag 0 ename 'file not found' ecode 0
I1206 11:05:37.561152  422526 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:53146 Tclunk tag 0 fid 0
I1206 11:05:37.561203  422526 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:53146 Rclunk tag 0
I1206 11:05:37.562323  422526 main.go:127] stdlog: ufs.go:147 disconnected
I1206 11:05:37.585027  422526 out.go:179] * Unmounting /mount-9p ...
I1206 11:05:37.588026  422526 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1206 11:05:37.595366  422526 mount.go:180] unmount for /mount-9p ran successfully
I1206 11:05:37.595531  422526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/.mount-process: {Name:mk4738f5b3dbf53bce671ea5a02250aae6a9dcf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1206 11:05:37.598675  422526 out.go:203] 
W1206 11:05:37.601707  422526 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1206 11:05:37.604866  422526 out.go:203] 
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.36s)
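Note: the 9p mount itself worked (see "Successfully mounted" and the findmnt/ls output above); the test failed only when kubectl replace hit the refused apiserver, after which the harness tore the mount down (the MK_INTERRUPTED exit is the cleanup signal, not a mount error). A minimal sketch for exercising the mount path by hand, assuming a hypothetical host directory /tmp/mnt-demo:

    $ out/minikube-linux-arm64 mount -p functional-196950 /tmp/mnt-demo:/mount-9p &      # the process must stay alive for the mount
    $ out/minikube-linux-arm64 -p functional-196950 ssh -- "findmnt -T /mount-9p"        # should show a 9p filesystem served from 192.168.49.1
    $ out/minikube-linux-arm64 -p functional-196950 ssh -- "sudo umount -f /mount-9p"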

TestJSONOutput/pause/Command (1.76s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-250118 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-250118 --output=json --user=testUser: exit status 80 (1.76201254s)

-- stdout --
	{"specversion":"1.0","id":"13839902-59e5-4bef-8b10-816453087611","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-250118 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"9a8528bb-1156-4901-a4f1-0c5dbcf8497c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-06T11:20:53Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"fd67daa5-fd9d-4a29-ad5c-22fff4c9986d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following file to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-250118 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.76s)
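Note: pause fails before touching any container: `sudo runc list -f json` aborts because /run/runc, runc's default state directory, does not exist on the node. A plausible explanation (an assumption, not something this log confirms) is that CRI-O 1.34 runs its containers through a runtime that keeps state elsewhere (crun uses /run/crun), leaving runc nothing to enumerate. A quick hedged check:

    $ out/minikube-linux-arm64 -p json-output-250118 ssh -- "sudo ls /run/runc /run/crun"    # which runtime state directory actually exists?
    $ out/minikube-linux-arm64 -p json-output-250118 ssh -- "sudo crictl ps"                 # containers are still running under CRI-O either way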

TestJSONOutput/unpause/Command (1.84s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-250118 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-250118 --output=json --user=testUser: exit status 80 (1.844309026s)

-- stdout --
	{"specversion":"1.0","id":"5e2164cb-05e7-4197-adc7-12b27f9e7767","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-250118 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"6f3cfe3f-415a-4971-b7f1-db395730238b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-06T11:20:55Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"322979fc-8bd0-4d18-b3da-4ab67a42100c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following file to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-250118 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.84s)
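Note: same root cause as the pause failure above; `runc list` again aborts on the missing /run/runc state directory before the unpause path can enumerate anything, so the same checks apply.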

TestKubernetesUpgrade (793.87s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-432995 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-432995 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (41.478501499s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-432995
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-432995: (1.430924915s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-432995 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-432995 status --format={{.Host}}: exit status 7 (87.240544ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-432995 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1206 11:38:55.064890  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-432995 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 109 (12m25.696853862s)

-- stdout --
	* [kubernetes-upgrade-432995] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-362985/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-362985/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-432995" primary control-plane node in "kubernetes-upgrade-432995" cluster
	* Pulling base image v0.0.48-1764843390-22032 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	
	

-- /stdout --
** stderr ** 
	I1206 11:38:47.825932  544774 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:38:47.826573  544774 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:38:47.826613  544774 out.go:374] Setting ErrFile to fd 2...
	I1206 11:38:47.826637  544774 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:38:47.826953  544774 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 11:38:47.827400  544774 out.go:368] Setting JSON to false
	I1206 11:38:47.828386  544774 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":12079,"bootTime":1765009049,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 11:38:47.828486  544774 start.go:143] virtualization:  
	I1206 11:38:47.833501  544774 out.go:179] * [kubernetes-upgrade-432995] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 11:38:47.836655  544774 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 11:38:47.836741  544774 notify.go:221] Checking for updates...
	I1206 11:38:47.840910  544774 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 11:38:47.843928  544774 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 11:38:47.846977  544774 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-362985/.minikube
	I1206 11:38:47.849841  544774 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 11:38:47.852694  544774 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 11:38:47.856084  544774 config.go:182] Loaded profile config "kubernetes-upgrade-432995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1206 11:38:47.856658  544774 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 11:38:47.893754  544774 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 11:38:47.893883  544774 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:38:47.994273  544774 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-06 11:38:47.985093228 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:38:47.994377  544774 docker.go:319] overlay module found
	I1206 11:38:47.997771  544774 out.go:179] * Using the docker driver based on existing profile
	I1206 11:38:48.000593  544774 start.go:309] selected driver: docker
	I1206 11:38:48.000610  544774 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-432995 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-432995 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:38:48.000714  544774 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 11:38:48.001447  544774 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:38:48.093342  544774 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-06 11:38:48.071315513 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:38:48.093683  544774 cni.go:84] Creating CNI manager for ""
	I1206 11:38:48.093753  544774 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 11:38:48.093796  544774 start.go:353] cluster config:
	{Name:kubernetes-upgrade-432995 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-432995 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:38:48.097042  544774 out.go:179] * Starting "kubernetes-upgrade-432995" primary control-plane node in "kubernetes-upgrade-432995" cluster
	I1206 11:38:48.099809  544774 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 11:38:48.102683  544774 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 11:38:48.105596  544774 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 11:38:48.105652  544774 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1206 11:38:48.105666  544774 cache.go:65] Caching tarball of preloaded images
	I1206 11:38:48.105770  544774 preload.go:238] Found /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1206 11:38:48.105786  544774 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1206 11:38:48.105891  544774 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/kubernetes-upgrade-432995/config.json ...
	I1206 11:38:48.106103  544774 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 11:38:48.136315  544774 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 11:38:48.136334  544774 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 11:38:48.136348  544774 cache.go:243] Successfully downloaded all kic artifacts
	I1206 11:38:48.136386  544774 start.go:360] acquireMachinesLock for kubernetes-upgrade-432995: {Name:mkf089d5f966bcc8e1aede2109ab408d0b89aff0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:38:48.136435  544774 start.go:364] duration metric: took 32.747µs to acquireMachinesLock for "kubernetes-upgrade-432995"
	I1206 11:38:48.136455  544774 start.go:96] Skipping create...Using existing machine configuration
	I1206 11:38:48.136460  544774 fix.go:54] fixHost starting: 
	I1206 11:38:48.136720  544774 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-432995 --format={{.State.Status}}
	I1206 11:38:48.151574  544774 fix.go:112] recreateIfNeeded on kubernetes-upgrade-432995: state=Stopped err=<nil>
	W1206 11:38:48.151600  544774 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 11:38:48.154913  544774 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-432995" ...
	I1206 11:38:48.155009  544774 cli_runner.go:164] Run: docker start kubernetes-upgrade-432995
	I1206 11:38:48.482155  544774 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-432995 --format={{.State.Status}}
	I1206 11:38:48.505199  544774 kic.go:430] container "kubernetes-upgrade-432995" state is running.
	I1206 11:38:48.508003  544774 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-432995
	I1206 11:38:48.533842  544774 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/kubernetes-upgrade-432995/config.json ...
	I1206 11:38:48.534070  544774 machine.go:94] provisionDockerMachine start ...
	I1206 11:38:48.534135  544774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-432995
	I1206 11:38:48.571197  544774 main.go:143] libmachine: Using SSH client type: native
	I1206 11:38:48.571624  544774 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33388 <nil> <nil>}
	I1206 11:38:48.571641  544774 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 11:38:48.572481  544774 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56702->127.0.0.1:33388: read: connection reset by peer
	I1206 11:38:51.735367  544774 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-432995
	
	I1206 11:38:51.735404  544774 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-432995"
	I1206 11:38:51.735475  544774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-432995
	I1206 11:38:51.757689  544774 main.go:143] libmachine: Using SSH client type: native
	I1206 11:38:51.757993  544774 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33388 <nil> <nil>}
	I1206 11:38:51.758004  544774 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-432995 && echo "kubernetes-upgrade-432995" | sudo tee /etc/hostname
	I1206 11:38:51.931455  544774 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-432995
	
	I1206 11:38:51.931593  544774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-432995
	I1206 11:38:51.961643  544774 main.go:143] libmachine: Using SSH client type: native
	I1206 11:38:51.961989  544774 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33388 <nil> <nil>}
	I1206 11:38:51.962009  544774 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-432995' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-432995/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-432995' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 11:38:52.144766  544774 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 11:38:52.144795  544774 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-362985/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-362985/.minikube}
	I1206 11:38:52.144823  544774 ubuntu.go:190] setting up certificates
	I1206 11:38:52.144833  544774 provision.go:84] configureAuth start
	I1206 11:38:52.144918  544774 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-432995
	I1206 11:38:52.173245  544774 provision.go:143] copyHostCerts
	I1206 11:38:52.173332  544774 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem, removing ...
	I1206 11:38:52.173348  544774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem
	I1206 11:38:52.173425  544774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem (1123 bytes)
	I1206 11:38:52.173537  544774 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem, removing ...
	I1206 11:38:52.173542  544774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem
	I1206 11:38:52.173569  544774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem (1679 bytes)
	I1206 11:38:52.173628  544774 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem, removing ...
	I1206 11:38:52.173633  544774 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem
	I1206 11:38:52.173656  544774 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem (1082 bytes)
	I1206 11:38:52.173711  544774 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-432995 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-432995 localhost minikube]
	I1206 11:38:52.263256  544774 provision.go:177] copyRemoteCerts
	I1206 11:38:52.263402  544774 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 11:38:52.263482  544774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-432995
	I1206 11:38:52.282131  544774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/kubernetes-upgrade-432995/id_rsa Username:docker}
	I1206 11:38:52.388437  544774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 11:38:52.410789  544774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1206 11:38:52.431850  544774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1206 11:38:52.453221  544774 provision.go:87] duration metric: took 308.361516ms to configureAuth
	I1206 11:38:52.453245  544774 ubuntu.go:206] setting minikube options for container-runtime
	I1206 11:38:52.453432  544774 config.go:182] Loaded profile config "kubernetes-upgrade-432995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 11:38:52.453534  544774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-432995
	I1206 11:38:52.477950  544774 main.go:143] libmachine: Using SSH client type: native
	I1206 11:38:52.478283  544774 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33388 <nil> <nil>}
	I1206 11:38:52.478297  544774 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 11:38:52.889619  544774 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 11:38:52.889709  544774 machine.go:97] duration metric: took 4.355628679s to provisionDockerMachine
	I1206 11:38:52.889736  544774 start.go:293] postStartSetup for "kubernetes-upgrade-432995" (driver="docker")
	I1206 11:38:52.889768  544774 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 11:38:52.889854  544774 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 11:38:52.889926  544774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-432995
	I1206 11:38:52.913005  544774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/kubernetes-upgrade-432995/id_rsa Username:docker}
	I1206 11:38:53.024607  544774 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 11:38:53.028871  544774 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 11:38:53.028896  544774 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 11:38:53.028907  544774 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-362985/.minikube/addons for local assets ...
	I1206 11:38:53.028961  544774 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-362985/.minikube/files for local assets ...
	I1206 11:38:53.029041  544774 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem -> 3648552.pem in /etc/ssl/certs
	I1206 11:38:53.029138  544774 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 11:38:53.037590  544774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem --> /etc/ssl/certs/3648552.pem (1708 bytes)
	I1206 11:38:53.058686  544774 start.go:296] duration metric: took 168.922255ms for postStartSetup
	I1206 11:38:53.058808  544774 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 11:38:53.058901  544774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-432995
	I1206 11:38:53.079839  544774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/kubernetes-upgrade-432995/id_rsa Username:docker}
	I1206 11:38:53.193589  544774 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 11:38:53.199249  544774 fix.go:56] duration metric: took 5.062782212s for fixHost
	I1206 11:38:53.199276  544774 start.go:83] releasing machines lock for "kubernetes-upgrade-432995", held for 5.062833831s
	I1206 11:38:53.199344  544774 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-432995
	I1206 11:38:53.218441  544774 ssh_runner.go:195] Run: cat /version.json
	I1206 11:38:53.218501  544774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-432995
	I1206 11:38:53.218744  544774 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 11:38:53.218800  544774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-432995
	I1206 11:38:53.252796  544774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/kubernetes-upgrade-432995/id_rsa Username:docker}
	I1206 11:38:53.255212  544774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33388 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/kubernetes-upgrade-432995/id_rsa Username:docker}
	I1206 11:38:53.447077  544774 ssh_runner.go:195] Run: systemctl --version
	I1206 11:38:53.454374  544774 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 11:38:53.517661  544774 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 11:38:53.526937  544774 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 11:38:53.527018  544774 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 11:38:53.541381  544774 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 11:38:53.541447  544774 start.go:496] detecting cgroup driver to use...
	I1206 11:38:53.541484  544774 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 11:38:53.541569  544774 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 11:38:53.569510  544774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 11:38:53.585955  544774 docker.go:218] disabling cri-docker service (if available) ...
	I1206 11:38:53.586055  544774 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 11:38:53.603759  544774 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 11:38:53.618960  544774 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 11:38:53.762147  544774 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 11:38:53.925806  544774 docker.go:234] disabling docker service ...
	I1206 11:38:53.925923  544774 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 11:38:53.944018  544774 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 11:38:53.959371  544774 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 11:38:54.129951  544774 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 11:38:54.306482  544774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
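The stop/disable/mask sequence above is belt-and-braces: disable only removes a unit's [Install]-section symlinks, while mask points the unit at /dev/null so even a manual start fails. A minimal sketch of the distinction:

    # disable: unit no longer starts at boot, but can still be started by hand
    sudo systemctl disable docker.socket
    # mask: unit cannot be started at all until unmasked
    sudo systemctl mask docker.service
    systemctl is-enabled docker.service   # prints "masked"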
	I1206 11:38:54.321208  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 11:38:54.350014  544774 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 11:38:54.350104  544774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:38:54.359236  544774 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 11:38:54.359326  544774 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:38:54.368104  544774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:38:54.377877  544774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:38:54.387075  544774 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 11:38:54.395690  544774 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:38:54.405120  544774 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:38:54.413998  544774 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:38:54.423361  544774 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 11:38:54.431994  544774 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 11:38:54.440185  544774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:38:54.593468  544774 ssh_runner.go:195] Run: sudo systemctl restart crio
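For reference, the sed edits above converge /etc/crio/crio.conf.d/02-crio.conf on a known shape before the restart. A quick spot-check over SSH (the key names come from the substitutions in the log; everything else in the file is assumed to keep its stock defaults):

    # print only the fields minikube just rewrote
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected, per the substitutions above:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",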
	I1206 11:38:54.832104  544774 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 11:38:54.832187  544774 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 11:38:54.836793  544774 start.go:564] Will wait 60s for crictl version
	I1206 11:38:54.836874  544774 ssh_runner.go:195] Run: which crictl
	I1206 11:38:54.842805  544774 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 11:38:54.882335  544774 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 11:38:54.882448  544774 ssh_runner.go:195] Run: crio --version
	I1206 11:38:54.918802  544774 ssh_runner.go:195] Run: crio --version
	I1206 11:38:54.957853  544774 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1206 11:38:54.960727  544774 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-432995 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 11:38:54.982546  544774 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1206 11:38:54.994344  544774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
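The grep/echo/cp pattern above, rather than a plain sed -i, matters inside a container: /etc/hosts is a Docker bind mount, so the file has to be overwritten in place (cp onto the same inode) rather than replaced by a rename. The same pattern for an arbitrary entry (HOSTLINE is a placeholder):

    HOSTLINE='192.168.76.1	host.minikube.internal'
    # drop any stale entry, append the new one, then cp over the original
    # so the bind-mounted inode survives
    { grep -v $'\thost.minikube.internal$' /etc/hosts; echo "$HOSTLINE"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts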
	I1206 11:38:55.007303  544774 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-432995 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-432995 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 11:38:55.007516  544774 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1206 11:38:55.007580  544774 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 11:38:55.092739  544774 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1206 11:38:55.092815  544774 ssh_runner.go:195] Run: which lz4
	I1206 11:38:55.100148  544774 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1206 11:38:55.109895  544774 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1206 11:38:55.109929  544774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (306100841 bytes)
	I1206 11:38:58.188582  544774 crio.go:462] duration metric: took 3.088486558s to copy over tarball
	I1206 11:38:58.188676  544774 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1206 11:39:00.913347  544774 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.724641393s)
	I1206 11:39:00.913380  544774 crio.go:469] duration metric: took 2.724772347s to extract the tarball
	I1206 11:39:00.913388  544774 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1206 11:39:00.968081  544774 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 11:39:01.017138  544774 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 11:39:01.017208  544774 cache_images.go:86] Images are preloaded, skipping loading
	I1206 11:39:01.017242  544774 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1206 11:39:01.017380  544774 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-432995 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-432995 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 11:39:01.017510  544774 ssh_runner.go:195] Run: crio config
	I1206 11:39:01.126180  544774 cni.go:84] Creating CNI manager for ""
	I1206 11:39:01.126242  544774 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 11:39:01.126293  544774 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 11:39:01.126335  544774 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-432995 NodeName:kubernetes-upgrade-432995 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 11:39:01.126519  544774 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-432995"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 11:39:01.126631  544774 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1206 11:39:01.135653  544774 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 11:39:01.135801  544774 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 11:39:01.149404  544774 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1206 11:39:01.168474  544774 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
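With the drop-in and unit file from the two scp calls above on disk, systemd's own tooling shows exactly what will run; a quick sketch:

    # render kubelet.service together with 10-kubeadm.conf and any other drop-ins
    systemctl cat kubelet
    # pick up the new files, as the log does a few lines down
    sudo systemctl daemon-reload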
	I1206 11:39:01.185220  544774 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
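Once kubeadm.yaml.new lands, recent kubeadm releases can also lint it before anything is applied; a sketch, assuming the staged v1.35.0-beta.0 binary supports the config validate subcommand:

    # static validation only: exits non-zero on unknown fields or
    # API-version mismatches, touches nothing on the cluster
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new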
	I1206 11:39:01.200873  544774 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1206 11:39:01.206353  544774 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1206 11:39:01.220137  544774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:39:01.464494  544774 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 11:39:01.487747  544774 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/kubernetes-upgrade-432995 for IP: 192.168.76.2
	I1206 11:39:01.487817  544774 certs.go:195] generating shared ca certs ...
	I1206 11:39:01.487848  544774 certs.go:227] acquiring lock for ca certs: {Name:mke2ec61a37b6f3abbcbeb9abd23d6a19d011dd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:39:01.488027  544774 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.key
	I1206 11:39:01.488124  544774 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.key
	I1206 11:39:01.488163  544774 certs.go:257] generating profile certs ...
	I1206 11:39:01.488608  544774 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/kubernetes-upgrade-432995/client.key
	I1206 11:39:01.488791  544774 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/kubernetes-upgrade-432995/apiserver.key.2cd18289
	I1206 11:39:01.488907  544774 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/kubernetes-upgrade-432995/proxy-client.key
	I1206 11:39:01.489095  544774 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855.pem (1338 bytes)
	W1206 11:39:01.489173  544774 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855_empty.pem, impossibly tiny 0 bytes
	I1206 11:39:01.489214  544774 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 11:39:01.489267  544774 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem (1082 bytes)
	I1206 11:39:01.489326  544774 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem (1123 bytes)
	I1206 11:39:01.489403  544774 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem (1679 bytes)
	I1206 11:39:01.489496  544774 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem (1708 bytes)
	I1206 11:39:01.491263  544774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 11:39:01.523114  544774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 11:39:01.580684  544774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 11:39:01.631846  544774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 11:39:01.671835  544774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/kubernetes-upgrade-432995/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1206 11:39:01.698399  544774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/kubernetes-upgrade-432995/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 11:39:01.726827  544774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/kubernetes-upgrade-432995/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 11:39:01.756540  544774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/kubernetes-upgrade-432995/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1206 11:39:01.790336  544774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855.pem --> /usr/share/ca-certificates/364855.pem (1338 bytes)
	I1206 11:39:01.824001  544774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem --> /usr/share/ca-certificates/3648552.pem (1708 bytes)
	I1206 11:39:01.856572  544774 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 11:39:01.886007  544774 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 11:39:01.913111  544774 ssh_runner.go:195] Run: openssl version
	I1206 11:39:01.924984  544774 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364855.pem
	I1206 11:39:01.937181  544774 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364855.pem /etc/ssl/certs/364855.pem
	I1206 11:39:01.949091  544774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364855.pem
	I1206 11:39:01.953735  544774 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:36 /usr/share/ca-certificates/364855.pem
	I1206 11:39:01.953851  544774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364855.pem
	I1206 11:39:02.008514  544774 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 11:39:02.020841  544774 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648552.pem
	I1206 11:39:02.031151  544774 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648552.pem /etc/ssl/certs/3648552.pem
	I1206 11:39:02.040688  544774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648552.pem
	I1206 11:39:02.048108  544774 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:36 /usr/share/ca-certificates/3648552.pem
	I1206 11:39:02.048235  544774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648552.pem
	I1206 11:39:02.098547  544774 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 11:39:02.106394  544774 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:39:02.114175  544774 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 11:39:02.122340  544774 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:39:02.130995  544774 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:26 /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:39:02.131118  544774 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:39:02.184314  544774 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
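The pattern repeated three times above is standard OpenSSL trust-directory plumbing: openssl x509 -hash -noout prints an 8-hex-digit subject hash, and a symlink named <hash>.0 in /etc/ssl/certs lets the library look a CA up by hash (b5213941.0 here is minikubeCA). The same linkage for one certificate:

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # e.g. b5213941
    # <hash>.0 means: first (index-0) certificate with this subject hash
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
    sudo test -L "/etc/ssl/certs/${HASH}.0" && echo linked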
	I1206 11:39:02.192319  544774 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 11:39:02.201127  544774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 11:39:02.288369  544774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 11:39:02.344225  544774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 11:39:02.399349  544774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 11:39:02.458440  544774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 11:39:02.512686  544774 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
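The six openssl runs above share one expiry probe: -checkend N exits 0 if the certificate is still valid N seconds from now and 1 otherwise, so 86400 asks whether the cert survives the next 24 hours. Standalone form:

    # the exit status is the signal; stdout is informational
    if openssl x509 -noout -checkend 86400 \
        -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
        echo "valid for at least another day"
    else
        echo "expires within 24h; regenerate before upgrading"
    fi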
	I1206 11:39:02.558978  544774 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-432995 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-432995 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:39:02.559118  544774 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 11:39:02.559223  544774 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 11:39:02.597854  544774 cri.go:89] found id: ""
	I1206 11:39:02.597978  544774 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 11:39:02.607399  544774 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 11:39:02.607468  544774 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 11:39:02.607557  544774 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 11:39:02.620219  544774 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 11:39:02.620762  544774 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-432995" does not appear in /home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 11:39:02.620941  544774 kubeconfig.go:62] /home/jenkins/minikube-integration/22047-362985/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-432995" cluster setting kubeconfig missing "kubernetes-upgrade-432995" context setting]
	I1206 11:39:02.621372  544774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/kubeconfig: {Name:mk779651834cfbdc6f0b5e8f5a9abc0f05106181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:39:02.622069  544774 kapi.go:59] client config for kubernetes-upgrade-432995: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22047-362985/.minikube/profiles/kubernetes-upgrade-432995/client.crt", KeyFile:"/home/jenkins/minikube-integration/22047-362985/.minikube/profiles/kubernetes-upgrade-432995/client.key", CAFile:"/home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 11:39:02.622819  544774 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1206 11:39:02.623011  544774 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1206 11:39:02.623042  544774 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1206 11:39:02.623072  544774 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1206 11:39:02.623092  544774 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1206 11:39:02.623617  544774 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 11:39:02.638350  544774 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-06 11:38:22.619603762 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-06 11:39:01.195989617 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.76.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///var/run/crio/crio.sock
	   name: "kubernetes-upgrade-432995"
	   kubeletExtraArgs:
	-    node-ip: 192.168.76.2
	+    - name: "node-ip"
	+      value: "192.168.76.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-beta.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
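The drift is the kubeadm v1beta3 to v1beta4 API bump: extraArgs changes from a string map to an ordered list of name/value pairs, the etcd proxy-refresh-interval extraArg is dropped, and kubernetesVersion moves from v1.28.0 to v1.35.0-beta.0. kubeadm can perform the same conversion itself; a sketch, assuming the staged target-version binary (the output path is illustrative):

    # rewrite an old-API config to the newest version this binary supports;
    # writes the migrated YAML and does not touch the cluster
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config migrate \
        --old-config /var/tmp/minikube/kubeadm.yaml \
        --new-config /var/tmp/minikube/kubeadm.migrated.yaml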
	I1206 11:39:02.638425  544774 kubeadm.go:1161] stopping kube-system containers ...
	I1206 11:39:02.638467  544774 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1206 11:39:02.638566  544774 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 11:39:02.703093  544774 cri.go:89] found id: ""
	I1206 11:39:02.703218  544774 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1206 11:39:02.723066  544774 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 11:39:02.736080  544774 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5643 Dec  6 11:38 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Dec  6 11:38 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec  6 11:38 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Dec  6 11:38 /etc/kubernetes/scheduler.conf
	
	I1206 11:39:02.736203  544774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 11:39:02.752910  544774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 11:39:02.767975  544774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 11:39:02.777808  544774 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 11:39:02.777930  544774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 11:39:02.788876  544774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 11:39:02.798325  544774 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1206 11:39:02.798444  544774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 11:39:02.809877  544774 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 11:39:02.819612  544774 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 11:39:02.900861  544774 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 11:39:04.309372  544774 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.408420749s)
	I1206 11:39:04.309538  544774 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1206 11:39:04.650091  544774 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1206 11:39:04.809395  544774 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1206 11:39:04.916615  544774 api_server.go:52] waiting for apiserver process to appear ...
	I1206 11:39:04.916693  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:05.417290  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:05.916830  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:06.416906  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:06.917083  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:07.417584  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:07.917374  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:08.417462  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:08.917231  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:09.416944  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:09.917456  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:10.417388  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:10.917357  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:11.417567  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:11.916822  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:12.417695  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:12.917473  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:13.417507  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:13.916899  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:14.417028  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:14.917860  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:15.417632  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:15.917474  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:16.416803  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:16.916870  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:17.416847  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:17.916811  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:18.416803  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:18.917390  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:19.416821  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:19.916865  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:20.417344  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:20.917648  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:21.417317  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:21.916875  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:22.417528  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:22.917589  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:23.417529  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:23.917560  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:24.417506  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:24.917575  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:25.417422  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:25.917425  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:26.417554  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:26.916843  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:27.417724  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:27.916766  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:28.417293  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:28.916915  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:29.417001  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:29.916872  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:30.416898  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:30.917599  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:31.417598  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:31.917455  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:32.416975  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:32.917107  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:33.416924  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:33.917166  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:34.417610  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:34.916848  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:35.417528  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:35.917477  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:36.417481  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:36.916933  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:37.416835  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:37.917606  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:38.417495  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:38.916867  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:39.416827  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:39.917708  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:40.417572  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:40.916835  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:41.416937  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:41.916961  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:42.416793  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:42.917145  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:43.416814  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:43.916836  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:44.417442  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:44.916846  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:45.417601  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:45.917585  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:46.417695  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:46.917633  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:47.417478  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:47.917807  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:48.417752  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:48.917494  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:49.417357  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:49.916909  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:50.417076  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:50.917522  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:51.417402  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:51.916858  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:52.417459  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:52.917864  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:53.417140  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:53.916898  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:54.416876  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:54.916835  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:55.416886  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:55.916960  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:56.416986  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:56.917383  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:57.417851  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:57.917373  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:58.416887  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:58.916824  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:59.417551  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:39:59.917579  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:40:00.419011  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:40:00.917633  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:40:01.417798  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:40:01.917542  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:40:02.416891  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:40:02.917464  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:40:03.417461  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:40:03.917601  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:40:04.417656  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
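The half-second poll above leans on three pgrep flags at once: -f matches against the full command line, -x requires the pattern to match that whole line exactly, and -n returns only the newest matching PID. Exit 0 means an apiserver process exists; here it never does, so after about a minute of polling minikube falls back to gathering logs below. Standalone form:

    # succeeds as soon as a kube-apiserver whose full command line
    # matches the pattern is running
    if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
        echo "apiserver process up"
    else
        echo "apiserver not running yet"
    fi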
	I1206 11:40:04.916811  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:40:04.916899  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:40:04.942730  544774 cri.go:89] found id: ""
	I1206 11:40:04.942757  544774 logs.go:282] 0 containers: []
	W1206 11:40:04.942766  544774 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:40:04.942772  544774 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:40:04.942834  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:40:04.970716  544774 cri.go:89] found id: ""
	I1206 11:40:04.970742  544774 logs.go:282] 0 containers: []
	W1206 11:40:04.970763  544774 logs.go:284] No container was found matching "etcd"
	I1206 11:40:04.970770  544774 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:40:04.970837  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:40:04.997176  544774 cri.go:89] found id: ""
	I1206 11:40:04.997202  544774 logs.go:282] 0 containers: []
	W1206 11:40:04.997211  544774 logs.go:284] No container was found matching "coredns"
	I1206 11:40:04.997217  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:40:04.997278  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:40:05.026435  544774 cri.go:89] found id: ""
	I1206 11:40:05.026511  544774 logs.go:282] 0 containers: []
	W1206 11:40:05.026534  544774 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:40:05.026555  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:40:05.026649  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:40:05.053356  544774 cri.go:89] found id: ""
	I1206 11:40:05.053380  544774 logs.go:282] 0 containers: []
	W1206 11:40:05.053389  544774 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:40:05.053395  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:40:05.053465  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:40:05.085527  544774 cri.go:89] found id: ""
	I1206 11:40:05.085550  544774 logs.go:282] 0 containers: []
	W1206 11:40:05.085559  544774 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:40:05.085566  544774 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:40:05.085628  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:40:05.119145  544774 cri.go:89] found id: ""
	I1206 11:40:05.119214  544774 logs.go:282] 0 containers: []
	W1206 11:40:05.119242  544774 logs.go:284] No container was found matching "kindnet"
	I1206 11:40:05.119259  544774 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:40:05.119327  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:40:05.153550  544774 cri.go:89] found id: ""
	I1206 11:40:05.153575  544774 logs.go:282] 0 containers: []
	W1206 11:40:05.153584  544774 logs.go:284] No container was found matching "storage-provisioner"
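	
	Once the process check gives up, the cycle above enumerates every expected control-plane component through the CRI. A sketch of that enumeration using the exact crictl flags from the log; listCRI and the local exec call are stand-ins for minikube's cri.go/ssh_runner.go plumbing:
	
	    package main
	
	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )
	
	    // listCRI returns container IDs for one component name; a failed
	    // exec is treated as "no containers" for this sketch.
	    func listCRI(name string) []string {
	        out, err := exec.Command("sudo", "crictl", "ps", "-a",
	            "--quiet", "--name="+name).Output()
	        if err != nil {
	            return nil
	        }
	        return strings.Fields(string(out))
	    }
	
	    func main() {
	        for _, c := range []string{
	            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
	            "kube-proxy", "kube-controller-manager", "kindnet",
	            "storage-provisioner",
	        } {
	            ids := listCRI(c)
	            if len(ids) == 0 {
	                fmt.Printf("no container was found matching %q\n", c)
	                continue
	            }
	            fmt.Printf("%s: %v\n", c, ids)
	        }
	    }
	
	--quiet makes crictl print only container IDs, so empty output means no container, running or exited, matches the name: hence the found id: "" lines above.
	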
	I1206 11:40:05.153594  544774 logs.go:123] Gathering logs for kubelet ...
	I1206 11:40:05.153606  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:40:05.223703  544774 logs.go:123] Gathering logs for dmesg ...
	I1206 11:40:05.223742  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:40:05.245458  544774 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:40:05.245489  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:40:05.430548  544774 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
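	
	The describe-nodes failure above is a plain TCP-level refusal: the kubeconfig at /var/lib/minikube/kubeconfig points kubectl at localhost:8443, and with no apiserver container running nothing is listening there. A minimal Go probe of the same condition, with the address taken from the log:
	
	    package main
	
	    import (
	        "fmt"
	        "net"
	        "time"
	    )
	
	    func main() {
	        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	        if err != nil {
	            // On this node the error is "connection refused", as in the log.
	            fmt.Println("apiserver not reachable:", err)
	            return
	        }
	        conn.Close()
	        fmt.Println("something is listening on localhost:8443")
	    }
	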
	I1206 11:40:05.430570  544774 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:40:05.430583  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:40:05.461922  544774 logs.go:123] Gathering logs for container status ...
	I1206 11:40:05.461956  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
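	
	The container-status one-liner above is a two-stage fallback: resolve crictl with `which` (falling back to the bare name on PATH), and if that whole invocation fails, try the Docker CLI instead. A Go rendering of the same fallback, assuming local exec:
	
	    package main
	
	    import (
	        "fmt"
	        "os/exec"
	    )
	
	    func main() {
	        // First choice: crictl, the natural CLI on this CRI-O node.
	        out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	        if err != nil {
	            // Mirrors the `|| sudo docker ps -a` branch of the one-liner.
	            out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	        }
	        if err != nil {
	            fmt.Println("no usable container CLI:", err)
	            return
	        }
	        fmt.Print(string(out))
	    }
	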
	I1206 11:40:07.998987  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:40:08.013592  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:40:08.013672  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:40:08.044339  544774 cri.go:89] found id: ""
	I1206 11:40:08.044369  544774 logs.go:282] 0 containers: []
	W1206 11:40:08.044378  544774 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:40:08.044384  544774 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:40:08.044447  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:40:08.071568  544774 cri.go:89] found id: ""
	I1206 11:40:08.071597  544774 logs.go:282] 0 containers: []
	W1206 11:40:08.071607  544774 logs.go:284] No container was found matching "etcd"
	I1206 11:40:08.071614  544774 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:40:08.071682  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:40:08.108665  544774 cri.go:89] found id: ""
	I1206 11:40:08.108692  544774 logs.go:282] 0 containers: []
	W1206 11:40:08.108701  544774 logs.go:284] No container was found matching "coredns"
	I1206 11:40:08.108709  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:40:08.108820  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:40:08.136231  544774 cri.go:89] found id: ""
	I1206 11:40:08.136303  544774 logs.go:282] 0 containers: []
	W1206 11:40:08.136328  544774 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:40:08.136347  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:40:08.136418  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:40:08.166620  544774 cri.go:89] found id: ""
	I1206 11:40:08.166698  544774 logs.go:282] 0 containers: []
	W1206 11:40:08.166721  544774 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:40:08.166739  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:40:08.166835  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:40:08.193194  544774 cri.go:89] found id: ""
	I1206 11:40:08.193225  544774 logs.go:282] 0 containers: []
	W1206 11:40:08.193234  544774 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:40:08.193241  544774 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:40:08.193300  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:40:08.219158  544774 cri.go:89] found id: ""
	I1206 11:40:08.219183  544774 logs.go:282] 0 containers: []
	W1206 11:40:08.219192  544774 logs.go:284] No container was found matching "kindnet"
	I1206 11:40:08.219198  544774 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:40:08.219257  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:40:08.245816  544774 cri.go:89] found id: ""
	I1206 11:40:08.245848  544774 logs.go:282] 0 containers: []
	W1206 11:40:08.245857  544774 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:40:08.245866  544774 logs.go:123] Gathering logs for kubelet ...
	I1206 11:40:08.245878  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:40:08.317731  544774 logs.go:123] Gathering logs for dmesg ...
	I1206 11:40:08.317776  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:40:08.334526  544774 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:40:08.334603  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:40:08.400047  544774 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:40:08.400065  544774 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:40:08.400078  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:40:08.432082  544774 logs.go:123] Gathering logs for container status ...
	I1206 11:40:08.432115  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:40:10.964022  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:40:10.975199  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:40:10.975279  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:40:11.010165  544774 cri.go:89] found id: ""
	I1206 11:40:11.010263  544774 logs.go:282] 0 containers: []
	W1206 11:40:11.010289  544774 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:40:11.010305  544774 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:40:11.010436  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:40:11.039986  544774 cri.go:89] found id: ""
	I1206 11:40:11.040015  544774 logs.go:282] 0 containers: []
	W1206 11:40:11.040032  544774 logs.go:284] No container was found matching "etcd"
	I1206 11:40:11.040040  544774 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:40:11.040109  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:40:11.068148  544774 cri.go:89] found id: ""
	I1206 11:40:11.068174  544774 logs.go:282] 0 containers: []
	W1206 11:40:11.068184  544774 logs.go:284] No container was found matching "coredns"
	I1206 11:40:11.068190  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:40:11.068253  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:40:11.099117  544774 cri.go:89] found id: ""
	I1206 11:40:11.099153  544774 logs.go:282] 0 containers: []
	W1206 11:40:11.099164  544774 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:40:11.099191  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:40:11.099326  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:40:11.133071  544774 cri.go:89] found id: ""
	I1206 11:40:11.133096  544774 logs.go:282] 0 containers: []
	W1206 11:40:11.133104  544774 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:40:11.133112  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:40:11.133178  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:40:11.165348  544774 cri.go:89] found id: ""
	I1206 11:40:11.165387  544774 logs.go:282] 0 containers: []
	W1206 11:40:11.165397  544774 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:40:11.165404  544774 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:40:11.165473  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:40:11.197473  544774 cri.go:89] found id: ""
	I1206 11:40:11.197501  544774 logs.go:282] 0 containers: []
	W1206 11:40:11.197510  544774 logs.go:284] No container was found matching "kindnet"
	I1206 11:40:11.197517  544774 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:40:11.197580  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:40:11.229095  544774 cri.go:89] found id: ""
	I1206 11:40:11.229131  544774 logs.go:282] 0 containers: []
	W1206 11:40:11.229142  544774 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:40:11.229151  544774 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:40:11.229163  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:40:11.297152  544774 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:40:11.297176  544774 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:40:11.297189  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:40:11.330353  544774 logs.go:123] Gathering logs for container status ...
	I1206 11:40:11.330389  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:40:11.362602  544774 logs.go:123] Gathering logs for kubelet ...
	I1206 11:40:11.362631  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:40:11.431818  544774 logs.go:123] Gathering logs for dmesg ...
	I1206 11:40:11.431855  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:40:13.949109  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:40:13.959867  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:40:13.959975  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:40:13.988106  544774 cri.go:89] found id: ""
	I1206 11:40:13.988133  544774 logs.go:282] 0 containers: []
	W1206 11:40:13.988142  544774 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:40:13.988149  544774 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:40:13.988222  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:40:14.026381  544774 cri.go:89] found id: ""
	I1206 11:40:14.026412  544774 logs.go:282] 0 containers: []
	W1206 11:40:14.026421  544774 logs.go:284] No container was found matching "etcd"
	I1206 11:40:14.026437  544774 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:40:14.026524  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:40:14.055906  544774 cri.go:89] found id: ""
	I1206 11:40:14.055935  544774 logs.go:282] 0 containers: []
	W1206 11:40:14.055944  544774 logs.go:284] No container was found matching "coredns"
	I1206 11:40:14.055951  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:40:14.056014  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:40:14.084653  544774 cri.go:89] found id: ""
	I1206 11:40:14.084675  544774 logs.go:282] 0 containers: []
	W1206 11:40:14.084684  544774 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:40:14.084690  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:40:14.084749  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:40:14.113624  544774 cri.go:89] found id: ""
	I1206 11:40:14.113708  544774 logs.go:282] 0 containers: []
	W1206 11:40:14.113731  544774 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:40:14.113751  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:40:14.113864  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:40:14.142093  544774 cri.go:89] found id: ""
	I1206 11:40:14.142176  544774 logs.go:282] 0 containers: []
	W1206 11:40:14.142201  544774 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:40:14.142221  544774 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:40:14.142333  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:40:14.174701  544774 cri.go:89] found id: ""
	I1206 11:40:14.174735  544774 logs.go:282] 0 containers: []
	W1206 11:40:14.174744  544774 logs.go:284] No container was found matching "kindnet"
	I1206 11:40:14.174751  544774 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:40:14.174829  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:40:14.203084  544774 cri.go:89] found id: ""
	I1206 11:40:14.203151  544774 logs.go:282] 0 containers: []
	W1206 11:40:14.203182  544774 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:40:14.203207  544774 logs.go:123] Gathering logs for kubelet ...
	I1206 11:40:14.203271  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:40:14.271634  544774 logs.go:123] Gathering logs for dmesg ...
	I1206 11:40:14.271669  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:40:14.288265  544774 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:40:14.288347  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:40:14.358313  544774 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:40:14.358387  544774 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:40:14.358413  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:40:14.389031  544774 logs.go:123] Gathering logs for container status ...
	I1206 11:40:14.389066  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:40:16.923947  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:40:16.935176  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:40:16.935245  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:40:16.964445  544774 cri.go:89] found id: ""
	I1206 11:40:16.964525  544774 logs.go:282] 0 containers: []
	W1206 11:40:16.964541  544774 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:40:16.964548  544774 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:40:16.964619  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:40:16.991158  544774 cri.go:89] found id: ""
	I1206 11:40:16.991181  544774 logs.go:282] 0 containers: []
	W1206 11:40:16.991190  544774 logs.go:284] No container was found matching "etcd"
	I1206 11:40:16.991196  544774 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:40:16.991256  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:40:17.027349  544774 cri.go:89] found id: ""
	I1206 11:40:17.027401  544774 logs.go:282] 0 containers: []
	W1206 11:40:17.027412  544774 logs.go:284] No container was found matching "coredns"
	I1206 11:40:17.027419  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:40:17.027483  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:40:17.058297  544774 cri.go:89] found id: ""
	I1206 11:40:17.058325  544774 logs.go:282] 0 containers: []
	W1206 11:40:17.058344  544774 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:40:17.058351  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:40:17.058418  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:40:17.091177  544774 cri.go:89] found id: ""
	I1206 11:40:17.091207  544774 logs.go:282] 0 containers: []
	W1206 11:40:17.091217  544774 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:40:17.091224  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:40:17.091286  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:40:17.124868  544774 cri.go:89] found id: ""
	I1206 11:40:17.124892  544774 logs.go:282] 0 containers: []
	W1206 11:40:17.124901  544774 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:40:17.124907  544774 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:40:17.124969  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:40:17.151524  544774 cri.go:89] found id: ""
	I1206 11:40:17.151598  544774 logs.go:282] 0 containers: []
	W1206 11:40:17.151622  544774 logs.go:284] No container was found matching "kindnet"
	I1206 11:40:17.151641  544774 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:40:17.151725  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:40:17.177792  544774 cri.go:89] found id: ""
	I1206 11:40:17.177813  544774 logs.go:282] 0 containers: []
	W1206 11:40:17.177822  544774 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:40:17.177831  544774 logs.go:123] Gathering logs for kubelet ...
	I1206 11:40:17.177842  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:40:17.246014  544774 logs.go:123] Gathering logs for dmesg ...
	I1206 11:40:17.246054  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:40:17.262226  544774 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:40:17.262257  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:40:17.327305  544774 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:40:17.327367  544774 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:40:17.327420  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:40:17.357923  544774 logs.go:123] Gathering logs for container status ...
	I1206 11:40:17.357958  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:40:19.895681  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:40:19.906085  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:40:19.906166  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:40:19.932351  544774 cri.go:89] found id: ""
	I1206 11:40:19.932372  544774 logs.go:282] 0 containers: []
	W1206 11:40:19.932381  544774 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:40:19.932387  544774 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:40:19.932463  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:40:19.959558  544774 cri.go:89] found id: ""
	I1206 11:40:19.959582  544774 logs.go:282] 0 containers: []
	W1206 11:40:19.959590  544774 logs.go:284] No container was found matching "etcd"
	I1206 11:40:19.959597  544774 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:40:19.959656  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:40:19.987116  544774 cri.go:89] found id: ""
	I1206 11:40:19.987145  544774 logs.go:282] 0 containers: []
	W1206 11:40:19.987155  544774 logs.go:284] No container was found matching "coredns"
	I1206 11:40:19.987161  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:40:19.987231  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:40:20.022045  544774 cri.go:89] found id: ""
	I1206 11:40:20.022223  544774 logs.go:282] 0 containers: []
	W1206 11:40:20.022252  544774 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:40:20.022272  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:40:20.022395  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:40:20.055214  544774 cri.go:89] found id: ""
	I1206 11:40:20.055241  544774 logs.go:282] 0 containers: []
	W1206 11:40:20.055250  544774 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:40:20.055256  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:40:20.055331  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:40:20.082503  544774 cri.go:89] found id: ""
	I1206 11:40:20.082576  544774 logs.go:282] 0 containers: []
	W1206 11:40:20.082602  544774 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:40:20.082620  544774 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:40:20.082707  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:40:20.109889  544774 cri.go:89] found id: ""
	I1206 11:40:20.109918  544774 logs.go:282] 0 containers: []
	W1206 11:40:20.109927  544774 logs.go:284] No container was found matching "kindnet"
	I1206 11:40:20.109934  544774 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:40:20.109993  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:40:20.137692  544774 cri.go:89] found id: ""
	I1206 11:40:20.137771  544774 logs.go:282] 0 containers: []
	W1206 11:40:20.137794  544774 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:40:20.137816  544774 logs.go:123] Gathering logs for kubelet ...
	I1206 11:40:20.137852  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:40:20.206316  544774 logs.go:123] Gathering logs for dmesg ...
	I1206 11:40:20.206354  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:40:20.222924  544774 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:40:20.222965  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:40:20.288562  544774 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:40:20.288586  544774 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:40:20.288600  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:40:20.320083  544774 logs.go:123] Gathering logs for container status ...
	I1206 11:40:20.320118  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:40:22.854126  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:40:22.864467  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:40:22.864582  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:40:22.889991  544774 cri.go:89] found id: ""
	I1206 11:40:22.890015  544774 logs.go:282] 0 containers: []
	W1206 11:40:22.890024  544774 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:40:22.890031  544774 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:40:22.890099  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:40:22.920239  544774 cri.go:89] found id: ""
	I1206 11:40:22.920306  544774 logs.go:282] 0 containers: []
	W1206 11:40:22.920320  544774 logs.go:284] No container was found matching "etcd"
	I1206 11:40:22.920327  544774 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:40:22.920405  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:40:22.948096  544774 cri.go:89] found id: ""
	I1206 11:40:22.948130  544774 logs.go:282] 0 containers: []
	W1206 11:40:22.948139  544774 logs.go:284] No container was found matching "coredns"
	I1206 11:40:22.948145  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:40:22.948233  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:40:22.975271  544774 cri.go:89] found id: ""
	I1206 11:40:22.975307  544774 logs.go:282] 0 containers: []
	W1206 11:40:22.975317  544774 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:40:22.975323  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:40:22.975411  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:40:23.013266  544774 cri.go:89] found id: ""
	I1206 11:40:23.013295  544774 logs.go:282] 0 containers: []
	W1206 11:40:23.013305  544774 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:40:23.013312  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:40:23.013375  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:40:23.040882  544774 cri.go:89] found id: ""
	I1206 11:40:23.040911  544774 logs.go:282] 0 containers: []
	W1206 11:40:23.040920  544774 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:40:23.040932  544774 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:40:23.040999  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:40:23.067437  544774 cri.go:89] found id: ""
	I1206 11:40:23.067463  544774 logs.go:282] 0 containers: []
	W1206 11:40:23.067471  544774 logs.go:284] No container was found matching "kindnet"
	I1206 11:40:23.067478  544774 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:40:23.067535  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:40:23.094995  544774 cri.go:89] found id: ""
	I1206 11:40:23.095016  544774 logs.go:282] 0 containers: []
	W1206 11:40:23.095024  544774 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:40:23.095033  544774 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:40:23.095045  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:40:23.165545  544774 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:40:23.165569  544774 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:40:23.165582  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:40:23.196981  544774 logs.go:123] Gathering logs for container status ...
	I1206 11:40:23.197015  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:40:23.226535  544774 logs.go:123] Gathering logs for kubelet ...
	I1206 11:40:23.226559  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:40:23.303306  544774 logs.go:123] Gathering logs for dmesg ...
	I1206 11:40:23.303351  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:40:25.820645  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:40:25.831955  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:40:25.832033  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:40:25.861158  544774 cri.go:89] found id: ""
	I1206 11:40:25.861180  544774 logs.go:282] 0 containers: []
	W1206 11:40:25.861189  544774 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:40:25.861196  544774 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:40:25.861255  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:40:25.887352  544774 cri.go:89] found id: ""
	I1206 11:40:25.887416  544774 logs.go:282] 0 containers: []
	W1206 11:40:25.887426  544774 logs.go:284] No container was found matching "etcd"
	I1206 11:40:25.887432  544774 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:40:25.887489  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:40:25.913511  544774 cri.go:89] found id: ""
	I1206 11:40:25.913536  544774 logs.go:282] 0 containers: []
	W1206 11:40:25.913545  544774 logs.go:284] No container was found matching "coredns"
	I1206 11:40:25.913552  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:40:25.913609  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:40:25.940306  544774 cri.go:89] found id: ""
	I1206 11:40:25.940334  544774 logs.go:282] 0 containers: []
	W1206 11:40:25.940344  544774 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:40:25.940350  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:40:25.940421  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:40:25.965521  544774 cri.go:89] found id: ""
	I1206 11:40:25.965545  544774 logs.go:282] 0 containers: []
	W1206 11:40:25.965554  544774 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:40:25.965560  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:40:25.965619  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:40:25.993470  544774 cri.go:89] found id: ""
	I1206 11:40:25.993496  544774 logs.go:282] 0 containers: []
	W1206 11:40:25.993505  544774 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:40:25.993513  544774 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:40:25.993573  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:40:26.020945  544774 cri.go:89] found id: ""
	I1206 11:40:26.020967  544774 logs.go:282] 0 containers: []
	W1206 11:40:26.020976  544774 logs.go:284] No container was found matching "kindnet"
	I1206 11:40:26.020984  544774 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:40:26.021048  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:40:26.049485  544774 cri.go:89] found id: ""
	I1206 11:40:26.049509  544774 logs.go:282] 0 containers: []
	W1206 11:40:26.049518  544774 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:40:26.049527  544774 logs.go:123] Gathering logs for kubelet ...
	I1206 11:40:26.049539  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:40:26.117697  544774 logs.go:123] Gathering logs for dmesg ...
	I1206 11:40:26.117776  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:40:26.136114  544774 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:40:26.136146  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:40:26.201554  544774 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:40:26.201576  544774 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:40:26.201589  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:40:26.232027  544774 logs.go:123] Gathering logs for container status ...
	I1206 11:40:26.232117  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:40:28.761925  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:40:28.772757  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:40:28.772823  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:40:28.798848  544774 cri.go:89] found id: ""
	I1206 11:40:28.798870  544774 logs.go:282] 0 containers: []
	W1206 11:40:28.798878  544774 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:40:28.798884  544774 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:40:28.798950  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:40:28.824770  544774 cri.go:89] found id: ""
	I1206 11:40:28.824797  544774 logs.go:282] 0 containers: []
	W1206 11:40:28.824806  544774 logs.go:284] No container was found matching "etcd"
	I1206 11:40:28.824812  544774 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:40:28.824875  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:40:28.855426  544774 cri.go:89] found id: ""
	I1206 11:40:28.855447  544774 logs.go:282] 0 containers: []
	W1206 11:40:28.855458  544774 logs.go:284] No container was found matching "coredns"
	I1206 11:40:28.855464  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:40:28.855526  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:40:28.881114  544774 cri.go:89] found id: ""
	I1206 11:40:28.881137  544774 logs.go:282] 0 containers: []
	W1206 11:40:28.881145  544774 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:40:28.881152  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:40:28.881210  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:40:28.912027  544774 cri.go:89] found id: ""
	I1206 11:40:28.912050  544774 logs.go:282] 0 containers: []
	W1206 11:40:28.912059  544774 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:40:28.912066  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:40:28.912130  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:40:28.943727  544774 cri.go:89] found id: ""
	I1206 11:40:28.943762  544774 logs.go:282] 0 containers: []
	W1206 11:40:28.943772  544774 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:40:28.943779  544774 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:40:28.943848  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:40:28.998290  544774 cri.go:89] found id: ""
	I1206 11:40:28.998320  544774 logs.go:282] 0 containers: []
	W1206 11:40:28.998331  544774 logs.go:284] No container was found matching "kindnet"
	I1206 11:40:28.998338  544774 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:40:28.998398  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:40:29.037957  544774 cri.go:89] found id: ""
	I1206 11:40:29.037984  544774 logs.go:282] 0 containers: []
	W1206 11:40:29.037992  544774 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:40:29.038002  544774 logs.go:123] Gathering logs for kubelet ...
	I1206 11:40:29.038014  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:40:29.132772  544774 logs.go:123] Gathering logs for dmesg ...
	I1206 11:40:29.132811  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:40:29.157574  544774 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:40:29.157698  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:40:29.243617  544774 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:40:29.243638  544774 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:40:29.243651  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:40:29.278239  544774 logs.go:123] Gathering logs for container status ...
	I1206 11:40:29.278274  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:40:31.827576  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:40:31.838409  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:40:31.838493  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:40:31.864706  544774 cri.go:89] found id: ""
	I1206 11:40:31.864732  544774 logs.go:282] 0 containers: []
	W1206 11:40:31.864742  544774 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:40:31.864748  544774 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:40:31.864809  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:40:31.892615  544774 cri.go:89] found id: ""
	I1206 11:40:31.892638  544774 logs.go:282] 0 containers: []
	W1206 11:40:31.892647  544774 logs.go:284] No container was found matching "etcd"
	I1206 11:40:31.892653  544774 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:40:31.892714  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:40:31.920385  544774 cri.go:89] found id: ""
	I1206 11:40:31.920407  544774 logs.go:282] 0 containers: []
	W1206 11:40:31.920416  544774 logs.go:284] No container was found matching "coredns"
	I1206 11:40:31.920422  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:40:31.920496  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:40:31.952799  544774 cri.go:89] found id: ""
	I1206 11:40:31.952822  544774 logs.go:282] 0 containers: []
	W1206 11:40:31.952831  544774 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:40:31.952839  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:40:31.952906  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:40:31.983636  544774 cri.go:89] found id: ""
	I1206 11:40:31.983660  544774 logs.go:282] 0 containers: []
	W1206 11:40:31.983668  544774 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:40:31.983675  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:40:31.983734  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:40:32.014129  544774 cri.go:89] found id: ""
	I1206 11:40:32.014173  544774 logs.go:282] 0 containers: []
	W1206 11:40:32.014184  544774 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:40:32.014191  544774 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:40:32.014285  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:40:32.040940  544774 cri.go:89] found id: ""
	I1206 11:40:32.041008  544774 logs.go:282] 0 containers: []
	W1206 11:40:32.041030  544774 logs.go:284] No container was found matching "kindnet"
	I1206 11:40:32.041048  544774 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:40:32.041131  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:40:32.067857  544774 cri.go:89] found id: ""
	I1206 11:40:32.067938  544774 logs.go:282] 0 containers: []
	W1206 11:40:32.067961  544774 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:40:32.067978  544774 logs.go:123] Gathering logs for container status ...
	I1206 11:40:32.068004  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:40:32.096994  544774 logs.go:123] Gathering logs for kubelet ...
	I1206 11:40:32.097024  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:40:32.169499  544774 logs.go:123] Gathering logs for dmesg ...
	I1206 11:40:32.169535  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:40:32.186680  544774 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:40:32.186710  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:40:32.253609  544774 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:40:32.253634  544774 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:40:32.253648  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:40:34.785138  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:40:34.795928  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:40:34.795996  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:40:34.822379  544774 cri.go:89] found id: ""
	I1206 11:40:34.822401  544774 logs.go:282] 0 containers: []
	W1206 11:40:34.822410  544774 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:40:34.822417  544774 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:40:34.822478  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:40:34.849381  544774 cri.go:89] found id: ""
	I1206 11:40:34.849407  544774 logs.go:282] 0 containers: []
	W1206 11:40:34.849416  544774 logs.go:284] No container was found matching "etcd"
	I1206 11:40:34.849422  544774 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:40:34.849486  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:40:34.875765  544774 cri.go:89] found id: ""
	I1206 11:40:34.875791  544774 logs.go:282] 0 containers: []
	W1206 11:40:34.875801  544774 logs.go:284] No container was found matching "coredns"
	I1206 11:40:34.875816  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:40:34.875878  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:40:34.902347  544774 cri.go:89] found id: ""
	I1206 11:40:34.902368  544774 logs.go:282] 0 containers: []
	W1206 11:40:34.902376  544774 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:40:34.902382  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:40:34.902442  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:40:34.928396  544774 cri.go:89] found id: ""
	I1206 11:40:34.928423  544774 logs.go:282] 0 containers: []
	W1206 11:40:34.928432  544774 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:40:34.928438  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:40:34.928497  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:40:34.954987  544774 cri.go:89] found id: ""
	I1206 11:40:34.955014  544774 logs.go:282] 0 containers: []
	W1206 11:40:34.955023  544774 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:40:34.955029  544774 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:40:34.955097  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:40:34.981845  544774 cri.go:89] found id: ""
	I1206 11:40:34.981872  544774 logs.go:282] 0 containers: []
	W1206 11:40:34.981881  544774 logs.go:284] No container was found matching "kindnet"
	I1206 11:40:34.981888  544774 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:40:34.981952  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:40:35.025968  544774 cri.go:89] found id: ""
	I1206 11:40:35.025998  544774 logs.go:282] 0 containers: []
	W1206 11:40:35.026008  544774 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:40:35.026018  544774 logs.go:123] Gathering logs for kubelet ...
	I1206 11:40:35.026033  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:40:35.098182  544774 logs.go:123] Gathering logs for dmesg ...
	I1206 11:40:35.098225  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:40:35.121647  544774 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:40:35.121681  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:40:35.199215  544774 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:40:35.199295  544774 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:40:35.199513  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:40:35.233585  544774 logs.go:123] Gathering logs for container status ...
	I1206 11:40:35.233623  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:40:37.766349  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:40:37.777689  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:40:37.777765  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:40:37.813975  544774 cri.go:89] found id: ""
	I1206 11:40:37.813999  544774 logs.go:282] 0 containers: []
	W1206 11:40:37.814008  544774 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:40:37.814016  544774 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:40:37.814091  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:40:37.842271  544774 cri.go:89] found id: ""
	I1206 11:40:37.842295  544774 logs.go:282] 0 containers: []
	W1206 11:40:37.842304  544774 logs.go:284] No container was found matching "etcd"
	I1206 11:40:37.842310  544774 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:40:37.842367  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:40:37.869496  544774 cri.go:89] found id: ""
	I1206 11:40:37.869523  544774 logs.go:282] 0 containers: []
	W1206 11:40:37.869534  544774 logs.go:284] No container was found matching "coredns"
	I1206 11:40:37.869541  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:40:37.869607  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:40:37.896311  544774 cri.go:89] found id: ""
	I1206 11:40:37.896335  544774 logs.go:282] 0 containers: []
	W1206 11:40:37.896344  544774 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:40:37.896350  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:40:37.896413  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:40:37.922082  544774 cri.go:89] found id: ""
	I1206 11:40:37.922109  544774 logs.go:282] 0 containers: []
	W1206 11:40:37.922119  544774 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:40:37.922125  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:40:37.922193  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:40:37.948583  544774 cri.go:89] found id: ""
	I1206 11:40:37.948607  544774 logs.go:282] 0 containers: []
	W1206 11:40:37.948615  544774 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:40:37.948622  544774 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:40:37.948685  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:40:37.975330  544774 cri.go:89] found id: ""
	I1206 11:40:37.975357  544774 logs.go:282] 0 containers: []
	W1206 11:40:37.975367  544774 logs.go:284] No container was found matching "kindnet"
	I1206 11:40:37.975396  544774 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:40:37.975460  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:40:38.003293  544774 cri.go:89] found id: ""
	I1206 11:40:38.003321  544774 logs.go:282] 0 containers: []
	W1206 11:40:38.003330  544774 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:40:38.003339  544774 logs.go:123] Gathering logs for container status ...
	I1206 11:40:38.003355  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:40:38.046585  544774 logs.go:123] Gathering logs for kubelet ...
	I1206 11:40:38.046618  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:40:38.120903  544774 logs.go:123] Gathering logs for dmesg ...
	I1206 11:40:38.120995  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:40:38.144901  544774 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:40:38.144948  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:40:38.214446  544774 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:40:38.214464  544774 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:40:38.214504  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
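
	The `sudo pgrep -xnf kube-apiserver.*minikube.*` lines recur roughly every three seconds, which reads as a simple poll-until-healthy loop. A hedged sketch of that pattern; the interval and function names are inferred from the log timestamps, not taken from minikube source:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "time"
	    )

	    // apiserverRunning mirrors the pgrep check from the log: pgrep exits
	    // non-zero when no process matches, which we treat as "not running".
	    func apiserverRunning() bool {
	        return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	    }

	    func main() {
	        for !apiserverRunning() {
	            // In the real log a full diagnostic pass runs here before the retry.
	            fmt.Println("kube-apiserver not found; retrying")
	            time.Sleep(3 * time.Second) // assumed interval, read off the timestamps above
	        }
	        fmt.Println("kube-apiserver process is up")
	    }
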
	I1206 11:40:40.745845  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:40:40.757102  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:40:40.757172  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:40:40.796716  544774 cri.go:89] found id: ""
	I1206 11:40:40.796740  544774 logs.go:282] 0 containers: []
	W1206 11:40:40.796749  544774 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:40:40.796756  544774 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:40:40.796823  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:40:40.826580  544774 cri.go:89] found id: ""
	I1206 11:40:40.826601  544774 logs.go:282] 0 containers: []
	W1206 11:40:40.826609  544774 logs.go:284] No container was found matching "etcd"
	I1206 11:40:40.826616  544774 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:40:40.826674  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:40:40.854242  544774 cri.go:89] found id: ""
	I1206 11:40:40.854269  544774 logs.go:282] 0 containers: []
	W1206 11:40:40.854279  544774 logs.go:284] No container was found matching "coredns"
	I1206 11:40:40.854286  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:40:40.854350  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:40:40.885162  544774 cri.go:89] found id: ""
	I1206 11:40:40.885195  544774 logs.go:282] 0 containers: []
	W1206 11:40:40.885205  544774 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:40:40.885212  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:40:40.885283  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:40:40.915688  544774 cri.go:89] found id: ""
	I1206 11:40:40.915716  544774 logs.go:282] 0 containers: []
	W1206 11:40:40.915725  544774 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:40:40.915731  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:40:40.915797  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:40:40.942604  544774 cri.go:89] found id: ""
	I1206 11:40:40.942639  544774 logs.go:282] 0 containers: []
	W1206 11:40:40.942648  544774 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:40:40.942655  544774 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:40:40.942725  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:40:40.967872  544774 cri.go:89] found id: ""
	I1206 11:40:40.967898  544774 logs.go:282] 0 containers: []
	W1206 11:40:40.967907  544774 logs.go:284] No container was found matching "kindnet"
	I1206 11:40:40.967914  544774 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:40:40.968005  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:40:40.997453  544774 cri.go:89] found id: ""
	I1206 11:40:40.997522  544774 logs.go:282] 0 containers: []
	W1206 11:40:40.997538  544774 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:40:40.997548  544774 logs.go:123] Gathering logs for kubelet ...
	I1206 11:40:40.997560  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:40:41.068747  544774 logs.go:123] Gathering logs for dmesg ...
	I1206 11:40:41.068783  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:40:41.085818  544774 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:40:41.085850  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:40:41.153976  544774 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:40:41.153995  544774 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:40:41.154010  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:40:41.184665  544774 logs.go:123] Gathering logs for container status ...
	I1206 11:40:41.184699  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:40:43.717193  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:40:43.728488  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:40:43.728564  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:40:43.759699  544774 cri.go:89] found id: ""
	I1206 11:40:43.759722  544774 logs.go:282] 0 containers: []
	W1206 11:40:43.759730  544774 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:40:43.759737  544774 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:40:43.759797  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:40:43.792839  544774 cri.go:89] found id: ""
	I1206 11:40:43.792866  544774 logs.go:282] 0 containers: []
	W1206 11:40:43.792874  544774 logs.go:284] No container was found matching "etcd"
	I1206 11:40:43.792880  544774 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:40:43.792943  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:40:43.818299  544774 cri.go:89] found id: ""
	I1206 11:40:43.818327  544774 logs.go:282] 0 containers: []
	W1206 11:40:43.818337  544774 logs.go:284] No container was found matching "coredns"
	I1206 11:40:43.818343  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:40:43.818404  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:40:43.844363  544774 cri.go:89] found id: ""
	I1206 11:40:43.844416  544774 logs.go:282] 0 containers: []
	W1206 11:40:43.844469  544774 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:40:43.844477  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:40:43.844543  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:40:43.870261  544774 cri.go:89] found id: ""
	I1206 11:40:43.870287  544774 logs.go:282] 0 containers: []
	W1206 11:40:43.870297  544774 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:40:43.870303  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:40:43.870366  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:40:43.896445  544774 cri.go:89] found id: ""
	I1206 11:40:43.896474  544774 logs.go:282] 0 containers: []
	W1206 11:40:43.896483  544774 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:40:43.896490  544774 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:40:43.896548  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:40:43.921335  544774 cri.go:89] found id: ""
	I1206 11:40:43.921403  544774 logs.go:282] 0 containers: []
	W1206 11:40:43.921427  544774 logs.go:284] No container was found matching "kindnet"
	I1206 11:40:43.921442  544774 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:40:43.921518  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:40:43.947296  544774 cri.go:89] found id: ""
	I1206 11:40:43.947340  544774 logs.go:282] 0 containers: []
	W1206 11:40:43.947349  544774 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:40:43.947358  544774 logs.go:123] Gathering logs for kubelet ...
	I1206 11:40:43.947408  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:40:44.015101  544774 logs.go:123] Gathering logs for dmesg ...
	I1206 11:40:44.015145  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:40:44.033005  544774 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:40:44.033036  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:40:44.100675  544774 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:40:44.100700  544774 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:40:44.100714  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:40:44.137614  544774 logs.go:123] Gathering logs for container status ...
	I1206 11:40:44.137655  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:40:46.667776  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:40:46.678434  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:40:46.678513  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:40:46.705942  544774 cri.go:89] found id: ""
	I1206 11:40:46.705971  544774 logs.go:282] 0 containers: []
	W1206 11:40:46.705979  544774 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:40:46.705986  544774 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:40:46.706053  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:40:46.751431  544774 cri.go:89] found id: ""
	I1206 11:40:46.751461  544774 logs.go:282] 0 containers: []
	W1206 11:40:46.751470  544774 logs.go:284] No container was found matching "etcd"
	I1206 11:40:46.751483  544774 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:40:46.751548  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:40:46.789957  544774 cri.go:89] found id: ""
	I1206 11:40:46.789985  544774 logs.go:282] 0 containers: []
	W1206 11:40:46.789994  544774 logs.go:284] No container was found matching "coredns"
	I1206 11:40:46.790001  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:40:46.790061  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:40:46.825368  544774 cri.go:89] found id: ""
	I1206 11:40:46.825395  544774 logs.go:282] 0 containers: []
	W1206 11:40:46.825405  544774 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:40:46.825411  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:40:46.825471  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:40:46.854842  544774 cri.go:89] found id: ""
	I1206 11:40:46.854878  544774 logs.go:282] 0 containers: []
	W1206 11:40:46.854887  544774 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:40:46.854894  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:40:46.854977  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:40:46.880893  544774 cri.go:89] found id: ""
	I1206 11:40:46.880916  544774 logs.go:282] 0 containers: []
	W1206 11:40:46.880925  544774 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:40:46.880932  544774 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:40:46.880991  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:40:46.908609  544774 cri.go:89] found id: ""
	I1206 11:40:46.908644  544774 logs.go:282] 0 containers: []
	W1206 11:40:46.908653  544774 logs.go:284] No container was found matching "kindnet"
	I1206 11:40:46.908659  544774 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:40:46.908765  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:40:46.937061  544774 cri.go:89] found id: ""
	I1206 11:40:46.937136  544774 logs.go:282] 0 containers: []
	W1206 11:40:46.937159  544774 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:40:46.937173  544774 logs.go:123] Gathering logs for kubelet ...
	I1206 11:40:46.937185  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:40:47.007211  544774 logs.go:123] Gathering logs for dmesg ...
	I1206 11:40:47.007261  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:40:47.025358  544774 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:40:47.025389  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:40:47.093422  544774 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:40:47.093441  544774 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:40:47.093456  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:40:47.125461  544774 logs.go:123] Gathering logs for container status ...
	I1206 11:40:47.125501  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:40:49.656197  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:40:49.666799  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:40:49.666872  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:40:49.692296  544774 cri.go:89] found id: ""
	I1206 11:40:49.692319  544774 logs.go:282] 0 containers: []
	W1206 11:40:49.692327  544774 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:40:49.692333  544774 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:40:49.692395  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:40:49.724969  544774 cri.go:89] found id: ""
	I1206 11:40:49.724992  544774 logs.go:282] 0 containers: []
	W1206 11:40:49.725001  544774 logs.go:284] No container was found matching "etcd"
	I1206 11:40:49.725008  544774 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:40:49.725070  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:40:49.760083  544774 cri.go:89] found id: ""
	I1206 11:40:49.760108  544774 logs.go:282] 0 containers: []
	W1206 11:40:49.760117  544774 logs.go:284] No container was found matching "coredns"
	I1206 11:40:49.760123  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:40:49.760182  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:40:49.792313  544774 cri.go:89] found id: ""
	I1206 11:40:49.792384  544774 logs.go:282] 0 containers: []
	W1206 11:40:49.792396  544774 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:40:49.792403  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:40:49.792462  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:40:49.818448  544774 cri.go:89] found id: ""
	I1206 11:40:49.818527  544774 logs.go:282] 0 containers: []
	W1206 11:40:49.818550  544774 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:40:49.818568  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:40:49.818673  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:40:49.844631  544774 cri.go:89] found id: ""
	I1206 11:40:49.844654  544774 logs.go:282] 0 containers: []
	W1206 11:40:49.844663  544774 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:40:49.844670  544774 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:40:49.844760  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:40:49.872694  544774 cri.go:89] found id: ""
	I1206 11:40:49.872721  544774 logs.go:282] 0 containers: []
	W1206 11:40:49.872730  544774 logs.go:284] No container was found matching "kindnet"
	I1206 11:40:49.872737  544774 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:40:49.872796  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:40:49.898986  544774 cri.go:89] found id: ""
	I1206 11:40:49.899008  544774 logs.go:282] 0 containers: []
	W1206 11:40:49.899016  544774 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:40:49.899025  544774 logs.go:123] Gathering logs for kubelet ...
	I1206 11:40:49.899036  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:40:49.966551  544774 logs.go:123] Gathering logs for dmesg ...
	I1206 11:40:49.966590  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:40:49.984074  544774 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:40:49.984104  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:40:50.057052  544774 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:40:50.057074  544774 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:40:50.057094  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:40:50.089074  544774 logs.go:123] Gathering logs for container status ...
	I1206 11:40:50.089112  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:40:52.629067  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:40:52.639821  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:40:52.639899  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:40:52.667578  544774 cri.go:89] found id: ""
	I1206 11:40:52.667600  544774 logs.go:282] 0 containers: []
	W1206 11:40:52.667608  544774 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:40:52.667621  544774 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:40:52.667680  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:40:52.693065  544774 cri.go:89] found id: ""
	I1206 11:40:52.693089  544774 logs.go:282] 0 containers: []
	W1206 11:40:52.693097  544774 logs.go:284] No container was found matching "etcd"
	I1206 11:40:52.693103  544774 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:40:52.693162  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:40:52.723269  544774 cri.go:89] found id: ""
	I1206 11:40:52.723298  544774 logs.go:282] 0 containers: []
	W1206 11:40:52.723308  544774 logs.go:284] No container was found matching "coredns"
	I1206 11:40:52.723314  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:40:52.723397  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:40:52.762463  544774 cri.go:89] found id: ""
	I1206 11:40:52.762486  544774 logs.go:282] 0 containers: []
	W1206 11:40:52.762495  544774 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:40:52.762501  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:40:52.762558  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:40:52.797823  544774 cri.go:89] found id: ""
	I1206 11:40:52.797845  544774 logs.go:282] 0 containers: []
	W1206 11:40:52.797854  544774 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:40:52.797860  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:40:52.797926  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:40:52.823943  544774 cri.go:89] found id: ""
	I1206 11:40:52.823967  544774 logs.go:282] 0 containers: []
	W1206 11:40:52.823977  544774 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:40:52.823983  544774 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:40:52.824041  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:40:52.850071  544774 cri.go:89] found id: ""
	I1206 11:40:52.850098  544774 logs.go:282] 0 containers: []
	W1206 11:40:52.850108  544774 logs.go:284] No container was found matching "kindnet"
	I1206 11:40:52.850115  544774 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:40:52.850180  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:40:52.877040  544774 cri.go:89] found id: ""
	I1206 11:40:52.877064  544774 logs.go:282] 0 containers: []
	W1206 11:40:52.877073  544774 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:40:52.877083  544774 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:40:52.877095  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:40:52.945385  544774 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:40:52.945404  544774 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:40:52.945423  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:40:52.976562  544774 logs.go:123] Gathering logs for container status ...
	I1206 11:40:52.976597  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:40:53.008410  544774 logs.go:123] Gathering logs for kubelet ...
	I1206 11:40:53.008453  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:40:53.081191  544774 logs.go:123] Gathering logs for dmesg ...
	I1206 11:40:53.081272  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
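
	Apart from `describe nodes`, the log sources rotate among three fixed shell commands: the kubelet and CRI-O units via journalctl, and kernel messages via dmesg filtered to warning level and above. A sketch mirroring those command strings (copied verbatim from the log; the struct slice and loop are illustrative):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    // Command strings are copied from the log lines above.
	    var gatherers = []struct{ name, cmd string }{
	        {"kubelet", "sudo journalctl -u kubelet -n 400"},
	        {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
	        {"CRI-O", "sudo journalctl -u crio -n 400"},
	    }

	    func main() {
	        for _, g := range gatherers {
	            fmt.Printf("Gathering logs for %s ...\n", g.name)
	            out, err := exec.Command("/bin/bash", "-c", g.cmd).CombinedOutput()
	            if err != nil {
	                fmt.Printf("%s gather failed: %v\n", g.name, err)
	                continue
	            }
	            fmt.Print(string(out))
	        }
	    }
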
	I1206 11:40:55.598897  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:40:55.608974  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:40:55.609057  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:40:55.635185  544774 cri.go:89] found id: ""
	I1206 11:40:55.635210  544774 logs.go:282] 0 containers: []
	W1206 11:40:55.635219  544774 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:40:55.635225  544774 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:40:55.635286  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:40:55.661115  544774 cri.go:89] found id: ""
	I1206 11:40:55.661141  544774 logs.go:282] 0 containers: []
	W1206 11:40:55.661150  544774 logs.go:284] No container was found matching "etcd"
	I1206 11:40:55.661157  544774 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:40:55.661229  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:40:55.690425  544774 cri.go:89] found id: ""
	I1206 11:40:55.690452  544774 logs.go:282] 0 containers: []
	W1206 11:40:55.690461  544774 logs.go:284] No container was found matching "coredns"
	I1206 11:40:55.690468  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:40:55.690537  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:40:55.715124  544774 cri.go:89] found id: ""
	I1206 11:40:55.715152  544774 logs.go:282] 0 containers: []
	W1206 11:40:55.715161  544774 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:40:55.715167  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:40:55.715225  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:40:55.748709  544774 cri.go:89] found id: ""
	I1206 11:40:55.748734  544774 logs.go:282] 0 containers: []
	W1206 11:40:55.748744  544774 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:40:55.748750  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:40:55.748817  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:40:55.780293  544774 cri.go:89] found id: ""
	I1206 11:40:55.780319  544774 logs.go:282] 0 containers: []
	W1206 11:40:55.780328  544774 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:40:55.780334  544774 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:40:55.780396  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:40:55.807865  544774 cri.go:89] found id: ""
	I1206 11:40:55.807894  544774 logs.go:282] 0 containers: []
	W1206 11:40:55.807913  544774 logs.go:284] No container was found matching "kindnet"
	I1206 11:40:55.807919  544774 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:40:55.807978  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:40:55.832473  544774 cri.go:89] found id: ""
	I1206 11:40:55.832498  544774 logs.go:282] 0 containers: []
	W1206 11:40:55.832507  544774 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:40:55.832516  544774 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:40:55.832557  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:40:55.868017  544774 logs.go:123] Gathering logs for container status ...
	I1206 11:40:55.868056  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:40:55.900740  544774 logs.go:123] Gathering logs for kubelet ...
	I1206 11:40:55.900770  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:40:55.969984  544774 logs.go:123] Gathering logs for dmesg ...
	I1206 11:40:55.970021  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:40:55.988399  544774 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:40:55.988436  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:40:56.059423  544774 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
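
	Every `describe nodes` attempt fails the same way: connection refused on localhost:8443, which is consistent with the empty kube-apiserver listings above (no container, so nothing is bound to the secure port). One quick, kubectl-independent way to confirm the port is closed; this probe is illustrative and not part of minikube:

	    package main

	    import (
	        "fmt"
	        "net"
	        "time"
	    )

	    func main() {
	        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	        if err != nil {
	            // With no apiserver container, this prints a "connection refused" error.
	            fmt.Println("apiserver port closed:", err)
	            return
	        }
	        conn.Close()
	        fmt.Println("something is listening on localhost:8443")
	    }
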
	I1206 11:40:58.560334  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:40:58.570696  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:40:58.570771  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:40:58.596198  544774 cri.go:89] found id: ""
	I1206 11:40:58.596221  544774 logs.go:282] 0 containers: []
	W1206 11:40:58.596230  544774 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:40:58.596236  544774 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:40:58.596297  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:40:58.621715  544774 cri.go:89] found id: ""
	I1206 11:40:58.621738  544774 logs.go:282] 0 containers: []
	W1206 11:40:58.621746  544774 logs.go:284] No container was found matching "etcd"
	I1206 11:40:58.621752  544774 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:40:58.621814  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:40:58.651199  544774 cri.go:89] found id: ""
	I1206 11:40:58.651231  544774 logs.go:282] 0 containers: []
	W1206 11:40:58.651244  544774 logs.go:284] No container was found matching "coredns"
	I1206 11:40:58.651250  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:40:58.651318  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:40:58.679196  544774 cri.go:89] found id: ""
	I1206 11:40:58.679219  544774 logs.go:282] 0 containers: []
	W1206 11:40:58.679228  544774 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:40:58.679235  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:40:58.679292  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:40:58.710089  544774 cri.go:89] found id: ""
	I1206 11:40:58.710112  544774 logs.go:282] 0 containers: []
	W1206 11:40:58.710120  544774 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:40:58.710126  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:40:58.710188  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:40:58.739262  544774 cri.go:89] found id: ""
	I1206 11:40:58.739283  544774 logs.go:282] 0 containers: []
	W1206 11:40:58.739292  544774 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:40:58.739298  544774 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:40:58.739358  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:40:58.771681  544774 cri.go:89] found id: ""
	I1206 11:40:58.771702  544774 logs.go:282] 0 containers: []
	W1206 11:40:58.771710  544774 logs.go:284] No container was found matching "kindnet"
	I1206 11:40:58.771716  544774 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:40:58.771772  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:40:58.805769  544774 cri.go:89] found id: ""
	I1206 11:40:58.805791  544774 logs.go:282] 0 containers: []
	W1206 11:40:58.805800  544774 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:40:58.805809  544774 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:40:58.805819  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:40:58.836238  544774 logs.go:123] Gathering logs for container status ...
	I1206 11:40:58.836270  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:40:58.864897  544774 logs.go:123] Gathering logs for kubelet ...
	I1206 11:40:58.864924  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:40:58.935045  544774 logs.go:123] Gathering logs for dmesg ...
	I1206 11:40:58.935083  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:40:58.956094  544774 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:40:58.956124  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:40:59.026615  544774 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:41:01.528310  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:41:01.539288  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:41:01.539358  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:41:01.565445  544774 cri.go:89] found id: ""
	I1206 11:41:01.565479  544774 logs.go:282] 0 containers: []
	W1206 11:41:01.565489  544774 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:41:01.565496  544774 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:41:01.565560  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:41:01.592028  544774 cri.go:89] found id: ""
	I1206 11:41:01.592052  544774 logs.go:282] 0 containers: []
	W1206 11:41:01.592061  544774 logs.go:284] No container was found matching "etcd"
	I1206 11:41:01.592068  544774 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:41:01.592132  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:41:01.618703  544774 cri.go:89] found id: ""
	I1206 11:41:01.618732  544774 logs.go:282] 0 containers: []
	W1206 11:41:01.618741  544774 logs.go:284] No container was found matching "coredns"
	I1206 11:41:01.618748  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:41:01.618813  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:41:01.647001  544774 cri.go:89] found id: ""
	I1206 11:41:01.647026  544774 logs.go:282] 0 containers: []
	W1206 11:41:01.647035  544774 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:41:01.647041  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:41:01.647101  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:41:01.676673  544774 cri.go:89] found id: ""
	I1206 11:41:01.676696  544774 logs.go:282] 0 containers: []
	W1206 11:41:01.676706  544774 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:41:01.676712  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:41:01.676772  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:41:01.703135  544774 cri.go:89] found id: ""
	I1206 11:41:01.703156  544774 logs.go:282] 0 containers: []
	W1206 11:41:01.703164  544774 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:41:01.703172  544774 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:41:01.703230  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:41:01.733902  544774 cri.go:89] found id: ""
	I1206 11:41:01.733923  544774 logs.go:282] 0 containers: []
	W1206 11:41:01.733938  544774 logs.go:284] No container was found matching "kindnet"
	I1206 11:41:01.733944  544774 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:41:01.734007  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:41:01.762151  544774 cri.go:89] found id: ""
	I1206 11:41:01.762172  544774 logs.go:282] 0 containers: []
	W1206 11:41:01.762181  544774 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:41:01.762189  544774 logs.go:123] Gathering logs for kubelet ...
	I1206 11:41:01.762200  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:41:01.840521  544774 logs.go:123] Gathering logs for dmesg ...
	I1206 11:41:01.840560  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:41:01.858706  544774 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:41:01.858742  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:41:01.926415  544774 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:41:01.926439  544774 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:41:01.926452  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:41:01.961216  544774 logs.go:123] Gathering logs for container status ...
	I1206 11:41:01.961254  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:41:04.491495  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:41:04.504172  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:41:04.504237  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:41:04.535171  544774 cri.go:89] found id: ""
	I1206 11:41:04.535199  544774 logs.go:282] 0 containers: []
	W1206 11:41:04.535209  544774 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:41:04.535216  544774 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:41:04.535283  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:41:04.562370  544774 cri.go:89] found id: ""
	I1206 11:41:04.562395  544774 logs.go:282] 0 containers: []
	W1206 11:41:04.562403  544774 logs.go:284] No container was found matching "etcd"
	I1206 11:41:04.562409  544774 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:41:04.562472  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:41:04.594110  544774 cri.go:89] found id: ""
	I1206 11:41:04.594133  544774 logs.go:282] 0 containers: []
	W1206 11:41:04.594141  544774 logs.go:284] No container was found matching "coredns"
	I1206 11:41:04.594147  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:41:04.594211  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:41:04.620417  544774 cri.go:89] found id: ""
	I1206 11:41:04.620445  544774 logs.go:282] 0 containers: []
	W1206 11:41:04.620455  544774 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:41:04.620461  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:41:04.620524  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:41:04.646225  544774 cri.go:89] found id: ""
	I1206 11:41:04.646251  544774 logs.go:282] 0 containers: []
	W1206 11:41:04.646260  544774 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:41:04.646267  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:41:04.646325  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:41:04.672451  544774 cri.go:89] found id: ""
	I1206 11:41:04.672472  544774 logs.go:282] 0 containers: []
	W1206 11:41:04.672481  544774 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:41:04.672487  544774 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:41:04.672545  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:41:04.702379  544774 cri.go:89] found id: ""
	I1206 11:41:04.702404  544774 logs.go:282] 0 containers: []
	W1206 11:41:04.702411  544774 logs.go:284] No container was found matching "kindnet"
	I1206 11:41:04.702418  544774 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:41:04.702474  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:41:04.736107  544774 cri.go:89] found id: ""
	I1206 11:41:04.736131  544774 logs.go:282] 0 containers: []
	W1206 11:41:04.736139  544774 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:41:04.736148  544774 logs.go:123] Gathering logs for kubelet ...
	I1206 11:41:04.736159  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:41:04.817096  544774 logs.go:123] Gathering logs for dmesg ...
	I1206 11:41:04.817138  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:41:04.834479  544774 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:41:04.834511  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:41:04.901308  544774 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:41:04.901330  544774 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:41:04.901345  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:41:04.933395  544774 logs.go:123] Gathering logs for container status ...
	I1206 11:41:04.933431  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:41:07.464895  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:41:07.476381  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:41:07.476447  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:41:07.518342  544774 cri.go:89] found id: ""
	I1206 11:41:07.518367  544774 logs.go:282] 0 containers: []
	W1206 11:41:07.518375  544774 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:41:07.518384  544774 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:41:07.518446  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:41:07.549806  544774 cri.go:89] found id: ""
	I1206 11:41:07.549829  544774 logs.go:282] 0 containers: []
	W1206 11:41:07.549837  544774 logs.go:284] No container was found matching "etcd"
	I1206 11:41:07.549843  544774 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:41:07.549906  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:41:07.587711  544774 cri.go:89] found id: ""
	I1206 11:41:07.587733  544774 logs.go:282] 0 containers: []
	W1206 11:41:07.587742  544774 logs.go:284] No container was found matching "coredns"
	I1206 11:41:07.587748  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:41:07.587810  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:41:07.626335  544774 cri.go:89] found id: ""
	I1206 11:41:07.626358  544774 logs.go:282] 0 containers: []
	W1206 11:41:07.626367  544774 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:41:07.626373  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:41:07.626434  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:41:07.655477  544774 cri.go:89] found id: ""
	I1206 11:41:07.655502  544774 logs.go:282] 0 containers: []
	W1206 11:41:07.655511  544774 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:41:07.655517  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:41:07.655584  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:41:07.700635  544774 cri.go:89] found id: ""
	I1206 11:41:07.700659  544774 logs.go:282] 0 containers: []
	W1206 11:41:07.700667  544774 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:41:07.700673  544774 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:41:07.700738  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:41:07.745674  544774 cri.go:89] found id: ""
	I1206 11:41:07.745698  544774 logs.go:282] 0 containers: []
	W1206 11:41:07.745710  544774 logs.go:284] No container was found matching "kindnet"
	I1206 11:41:07.745716  544774 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:41:07.745780  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:41:07.816397  544774 cri.go:89] found id: ""
	I1206 11:41:07.816420  544774 logs.go:282] 0 containers: []
	W1206 11:41:07.816428  544774 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:41:07.816436  544774 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:41:07.816448  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:41:07.909286  544774 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:41:07.909304  544774 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:41:07.909316  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:41:07.952758  544774 logs.go:123] Gathering logs for container status ...
	I1206 11:41:07.952802  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:41:07.986678  544774 logs.go:123] Gathering logs for kubelet ...
	I1206 11:41:07.986714  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:41:08.061641  544774 logs.go:123] Gathering logs for dmesg ...
	I1206 11:41:08.061684  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:41:10.580701  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:41:10.594099  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:41:10.594241  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:41:10.634421  544774 cri.go:89] found id: ""
	I1206 11:41:10.634486  544774 logs.go:282] 0 containers: []
	W1206 11:41:10.634509  544774 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:41:10.634527  544774 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:41:10.634598  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:41:10.664411  544774 cri.go:89] found id: ""
	I1206 11:41:10.664483  544774 logs.go:282] 0 containers: []
	W1206 11:41:10.664516  544774 logs.go:284] No container was found matching "etcd"
	I1206 11:41:10.664537  544774 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:41:10.664612  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:41:10.708795  544774 cri.go:89] found id: ""
	I1206 11:41:10.708862  544774 logs.go:282] 0 containers: []
	W1206 11:41:10.708885  544774 logs.go:284] No container was found matching "coredns"
	I1206 11:41:10.708902  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:41:10.708980  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:41:10.785100  544774 cri.go:89] found id: ""
	I1206 11:41:10.785163  544774 logs.go:282] 0 containers: []
	W1206 11:41:10.785185  544774 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:41:10.785203  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:41:10.785274  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:41:10.843465  544774 cri.go:89] found id: ""
	I1206 11:41:10.843533  544774 logs.go:282] 0 containers: []
	W1206 11:41:10.843557  544774 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:41:10.843575  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:41:10.843661  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:41:10.881216  544774 cri.go:89] found id: ""
	I1206 11:41:10.881281  544774 logs.go:282] 0 containers: []
	W1206 11:41:10.881303  544774 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:41:10.881322  544774 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:41:10.881396  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:41:10.917585  544774 cri.go:89] found id: ""
	I1206 11:41:10.917659  544774 logs.go:282] 0 containers: []
	W1206 11:41:10.917682  544774 logs.go:284] No container was found matching "kindnet"
	I1206 11:41:10.917701  544774 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:41:10.917772  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:41:10.977208  544774 cri.go:89] found id: ""
	I1206 11:41:10.977275  544774 logs.go:282] 0 containers: []
	W1206 11:41:10.977297  544774 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:41:10.977318  544774 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:41:10.977356  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:41:11.090247  544774 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:41:11.090319  544774 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:41:11.090347  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:41:11.126067  544774 logs.go:123] Gathering logs for container status ...
	I1206 11:41:11.126112  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:41:11.162263  544774 logs.go:123] Gathering logs for kubelet ...
	I1206 11:41:11.162290  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:41:11.241185  544774 logs.go:123] Gathering logs for dmesg ...
	I1206 11:41:11.241265  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:41:13.759545  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:41:13.771118  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:41:13.771190  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:41:13.802612  544774 cri.go:89] found id: ""
	I1206 11:41:13.802640  544774 logs.go:282] 0 containers: []
	W1206 11:41:13.802650  544774 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:41:13.802657  544774 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:41:13.802715  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:41:13.828462  544774 cri.go:89] found id: ""
	I1206 11:41:13.828490  544774 logs.go:282] 0 containers: []
	W1206 11:41:13.828499  544774 logs.go:284] No container was found matching "etcd"
	I1206 11:41:13.828505  544774 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:41:13.828565  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:41:13.854009  544774 cri.go:89] found id: ""
	I1206 11:41:13.854036  544774 logs.go:282] 0 containers: []
	W1206 11:41:13.854046  544774 logs.go:284] No container was found matching "coredns"
	I1206 11:41:13.854052  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:41:13.854113  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:41:13.880119  544774 cri.go:89] found id: ""
	I1206 11:41:13.880145  544774 logs.go:282] 0 containers: []
	W1206 11:41:13.880154  544774 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:41:13.880160  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:41:13.880220  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:41:13.906516  544774 cri.go:89] found id: ""
	I1206 11:41:13.906543  544774 logs.go:282] 0 containers: []
	W1206 11:41:13.906552  544774 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:41:13.906558  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:41:13.906666  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:41:13.939210  544774 cri.go:89] found id: ""
	I1206 11:41:13.939231  544774 logs.go:282] 0 containers: []
	W1206 11:41:13.939239  544774 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:41:13.939246  544774 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:41:13.939321  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:41:13.981084  544774 cri.go:89] found id: ""
	I1206 11:41:13.981116  544774 logs.go:282] 0 containers: []
	W1206 11:41:13.981124  544774 logs.go:284] No container was found matching "kindnet"
	I1206 11:41:13.981130  544774 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:41:13.981189  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:41:14.021903  544774 cri.go:89] found id: ""
	I1206 11:41:14.021925  544774 logs.go:282] 0 containers: []
	W1206 11:41:14.021934  544774 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:41:14.021943  544774 logs.go:123] Gathering logs for container status ...
	I1206 11:41:14.021954  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:41:14.061154  544774 logs.go:123] Gathering logs for kubelet ...
	I1206 11:41:14.061181  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:41:14.143363  544774 logs.go:123] Gathering logs for dmesg ...
	I1206 11:41:14.143462  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:41:14.162811  544774 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:41:14.162838  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:41:14.248854  544774 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:41:14.248927  544774 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:41:14.248954  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:41:16.791503  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:41:16.801526  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:41:16.801596  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:41:16.830212  544774 cri.go:89] found id: ""
	I1206 11:41:16.830234  544774 logs.go:282] 0 containers: []
	W1206 11:41:16.830243  544774 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:41:16.830249  544774 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:41:16.830308  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:41:16.856438  544774 cri.go:89] found id: ""
	I1206 11:41:16.856462  544774 logs.go:282] 0 containers: []
	W1206 11:41:16.856471  544774 logs.go:284] No container was found matching "etcd"
	I1206 11:41:16.856477  544774 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:41:16.856537  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:41:16.881383  544774 cri.go:89] found id: ""
	I1206 11:41:16.881408  544774 logs.go:282] 0 containers: []
	W1206 11:41:16.881418  544774 logs.go:284] No container was found matching "coredns"
	I1206 11:41:16.881425  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:41:16.881484  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:41:16.906983  544774 cri.go:89] found id: ""
	I1206 11:41:16.907008  544774 logs.go:282] 0 containers: []
	W1206 11:41:16.907017  544774 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:41:16.907024  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:41:16.907136  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:41:16.934594  544774 cri.go:89] found id: ""
	I1206 11:41:16.934619  544774 logs.go:282] 0 containers: []
	W1206 11:41:16.934631  544774 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:41:16.934637  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:41:16.934699  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:41:16.966381  544774 cri.go:89] found id: ""
	I1206 11:41:16.966409  544774 logs.go:282] 0 containers: []
	W1206 11:41:16.966419  544774 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:41:16.966426  544774 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:41:16.966497  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:41:16.994192  544774 cri.go:89] found id: ""
	I1206 11:41:16.994219  544774 logs.go:282] 0 containers: []
	W1206 11:41:16.994229  544774 logs.go:284] No container was found matching "kindnet"
	I1206 11:41:16.994236  544774 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:41:16.994299  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:41:17.025761  544774 cri.go:89] found id: ""
	I1206 11:41:17.025785  544774 logs.go:282] 0 containers: []
	W1206 11:41:17.025794  544774 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:41:17.025803  544774 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:41:17.025845  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:41:17.090474  544774 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:41:17.090493  544774 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:41:17.090531  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:41:17.121437  544774 logs.go:123] Gathering logs for container status ...
	I1206 11:41:17.121472  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:41:17.151534  544774 logs.go:123] Gathering logs for kubelet ...
	I1206 11:41:17.151562  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:41:17.218906  544774 logs.go:123] Gathering logs for dmesg ...
	I1206 11:41:17.218946  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:41:19.735528  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:41:19.747997  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:41:19.748076  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:41:19.776622  544774 cri.go:89] found id: ""
	I1206 11:41:19.776649  544774 logs.go:282] 0 containers: []
	W1206 11:41:19.776658  544774 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:41:19.776666  544774 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:41:19.776726  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:41:19.807615  544774 cri.go:89] found id: ""
	I1206 11:41:19.807638  544774 logs.go:282] 0 containers: []
	W1206 11:41:19.807646  544774 logs.go:284] No container was found matching "etcd"
	I1206 11:41:19.807653  544774 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:41:19.807714  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:41:19.837108  544774 cri.go:89] found id: ""
	I1206 11:41:19.837134  544774 logs.go:282] 0 containers: []
	W1206 11:41:19.837143  544774 logs.go:284] No container was found matching "coredns"
	I1206 11:41:19.837149  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:41:19.837210  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:41:19.863188  544774 cri.go:89] found id: ""
	I1206 11:41:19.863209  544774 logs.go:282] 0 containers: []
	W1206 11:41:19.863218  544774 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:41:19.863224  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:41:19.863288  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:41:19.889295  544774 cri.go:89] found id: ""
	I1206 11:41:19.889321  544774 logs.go:282] 0 containers: []
	W1206 11:41:19.889330  544774 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:41:19.889336  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:41:19.889444  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:41:19.918310  544774 cri.go:89] found id: ""
	I1206 11:41:19.918336  544774 logs.go:282] 0 containers: []
	W1206 11:41:19.918344  544774 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:41:19.918352  544774 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:41:19.918416  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:41:19.949028  544774 cri.go:89] found id: ""
	I1206 11:41:19.949050  544774 logs.go:282] 0 containers: []
	W1206 11:41:19.949059  544774 logs.go:284] No container was found matching "kindnet"
	I1206 11:41:19.949066  544774 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:41:19.949126  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:41:19.977103  544774 cri.go:89] found id: ""
	I1206 11:41:19.977128  544774 logs.go:282] 0 containers: []
	W1206 11:41:19.977137  544774 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:41:19.977146  544774 logs.go:123] Gathering logs for container status ...
	I1206 11:41:19.977157  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:41:20.010669  544774 logs.go:123] Gathering logs for kubelet ...
	I1206 11:41:20.010699  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:41:20.096022  544774 logs.go:123] Gathering logs for dmesg ...
	I1206 11:41:20.096061  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:41:20.113697  544774 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:41:20.113728  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:41:20.184880  544774 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:41:20.184901  544774 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:41:20.184914  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:41:22.717762  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:41:22.728903  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:41:22.728976  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:41:22.757150  544774 cri.go:89] found id: ""
	I1206 11:41:22.757176  544774 logs.go:282] 0 containers: []
	W1206 11:41:22.757185  544774 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:41:22.757192  544774 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:41:22.757251  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:41:22.786753  544774 cri.go:89] found id: ""
	I1206 11:41:22.786779  544774 logs.go:282] 0 containers: []
	W1206 11:41:22.786788  544774 logs.go:284] No container was found matching "etcd"
	I1206 11:41:22.786794  544774 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:41:22.786910  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:41:22.816555  544774 cri.go:89] found id: ""
	I1206 11:41:22.816582  544774 logs.go:282] 0 containers: []
	W1206 11:41:22.816591  544774 logs.go:284] No container was found matching "coredns"
	I1206 11:41:22.816598  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:41:22.816657  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:41:22.842470  544774 cri.go:89] found id: ""
	I1206 11:41:22.842496  544774 logs.go:282] 0 containers: []
	W1206 11:41:22.842506  544774 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:41:22.842513  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:41:22.842595  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:41:22.870261  544774 cri.go:89] found id: ""
	I1206 11:41:22.870288  544774 logs.go:282] 0 containers: []
	W1206 11:41:22.870296  544774 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:41:22.870302  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:41:22.870362  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:41:22.895807  544774 cri.go:89] found id: ""
	I1206 11:41:22.895831  544774 logs.go:282] 0 containers: []
	W1206 11:41:22.895840  544774 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:41:22.895847  544774 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:41:22.895906  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:41:22.921364  544774 cri.go:89] found id: ""
	I1206 11:41:22.921389  544774 logs.go:282] 0 containers: []
	W1206 11:41:22.921397  544774 logs.go:284] No container was found matching "kindnet"
	I1206 11:41:22.921404  544774 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:41:22.921465  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:41:22.957595  544774 cri.go:89] found id: ""
	I1206 11:41:22.957622  544774 logs.go:282] 0 containers: []
	W1206 11:41:22.957630  544774 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:41:22.957639  544774 logs.go:123] Gathering logs for kubelet ...
	I1206 11:41:22.957681  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:41:23.035159  544774 logs.go:123] Gathering logs for dmesg ...
	I1206 11:41:23.035209  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:41:23.052094  544774 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:41:23.052124  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:41:23.121949  544774 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:41:23.121970  544774 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:41:23.121983  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:41:23.153937  544774 logs.go:123] Gathering logs for container status ...
	I1206 11:41:23.153973  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:41:25.686527  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:41:25.697112  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:41:25.697185  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:41:25.738401  544774 cri.go:89] found id: ""
	I1206 11:41:25.738424  544774 logs.go:282] 0 containers: []
	W1206 11:41:25.738433  544774 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:41:25.738439  544774 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:41:25.738499  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:41:25.765624  544774 cri.go:89] found id: ""
	I1206 11:41:25.765649  544774 logs.go:282] 0 containers: []
	W1206 11:41:25.765658  544774 logs.go:284] No container was found matching "etcd"
	I1206 11:41:25.765664  544774 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:41:25.765725  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:41:25.801004  544774 cri.go:89] found id: ""
	I1206 11:41:25.801030  544774 logs.go:282] 0 containers: []
	W1206 11:41:25.801038  544774 logs.go:284] No container was found matching "coredns"
	I1206 11:41:25.801045  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:41:25.801103  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:41:25.826913  544774 cri.go:89] found id: ""
	I1206 11:41:25.826943  544774 logs.go:282] 0 containers: []
	W1206 11:41:25.826951  544774 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:41:25.826958  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:41:25.827018  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:41:25.852749  544774 cri.go:89] found id: ""
	I1206 11:41:25.852774  544774 logs.go:282] 0 containers: []
	W1206 11:41:25.852794  544774 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:41:25.852801  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:41:25.852904  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:41:25.879148  544774 cri.go:89] found id: ""
	I1206 11:41:25.879175  544774 logs.go:282] 0 containers: []
	W1206 11:41:25.879186  544774 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:41:25.879193  544774 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:41:25.879252  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:41:25.905659  544774 cri.go:89] found id: ""
	I1206 11:41:25.905687  544774 logs.go:282] 0 containers: []
	W1206 11:41:25.905697  544774 logs.go:284] No container was found matching "kindnet"
	I1206 11:41:25.905704  544774 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:41:25.905768  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:41:25.932595  544774 cri.go:89] found id: ""
	I1206 11:41:25.932625  544774 logs.go:282] 0 containers: []
	W1206 11:41:25.932634  544774 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:41:25.932643  544774 logs.go:123] Gathering logs for kubelet ...
	I1206 11:41:25.932656  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:41:26.010794  544774 logs.go:123] Gathering logs for dmesg ...
	I1206 11:41:26.010848  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:41:26.036013  544774 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:41:26.036046  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:41:26.106301  544774 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:41:26.106327  544774 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:41:26.106342  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:41:26.138605  544774 logs.go:123] Gathering logs for container status ...
	I1206 11:41:26.138641  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:41:28.668884  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:41:28.679361  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:41:28.679453  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:41:28.708350  544774 cri.go:89] found id: ""
	I1206 11:41:28.708386  544774 logs.go:282] 0 containers: []
	W1206 11:41:28.708395  544774 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:41:28.708401  544774 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:41:28.708462  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:41:28.753194  544774 cri.go:89] found id: ""
	I1206 11:41:28.753221  544774 logs.go:282] 0 containers: []
	W1206 11:41:28.753230  544774 logs.go:284] No container was found matching "etcd"
	I1206 11:41:28.753236  544774 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:41:28.753297  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:41:28.783200  544774 cri.go:89] found id: ""
	I1206 11:41:28.783228  544774 logs.go:282] 0 containers: []
	W1206 11:41:28.783237  544774 logs.go:284] No container was found matching "coredns"
	I1206 11:41:28.783243  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:41:28.783331  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:41:28.816078  544774 cri.go:89] found id: ""
	I1206 11:41:28.816109  544774 logs.go:282] 0 containers: []
	W1206 11:41:28.816117  544774 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:41:28.816123  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:41:28.816206  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:41:28.841448  544774 cri.go:89] found id: ""
	I1206 11:41:28.841474  544774 logs.go:282] 0 containers: []
	W1206 11:41:28.841483  544774 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:41:28.841528  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:41:28.841603  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:41:28.867537  544774 cri.go:89] found id: ""
	I1206 11:41:28.867566  544774 logs.go:282] 0 containers: []
	W1206 11:41:28.867575  544774 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:41:28.867581  544774 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:41:28.867645  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:41:28.894433  544774 cri.go:89] found id: ""
	I1206 11:41:28.894467  544774 logs.go:282] 0 containers: []
	W1206 11:41:28.894476  544774 logs.go:284] No container was found matching "kindnet"
	I1206 11:41:28.894483  544774 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:41:28.894555  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:41:28.920678  544774 cri.go:89] found id: ""
	I1206 11:41:28.920702  544774 logs.go:282] 0 containers: []
	W1206 11:41:28.920710  544774 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:41:28.920719  544774 logs.go:123] Gathering logs for kubelet ...
	I1206 11:41:28.920731  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:41:28.993143  544774 logs.go:123] Gathering logs for dmesg ...
	I1206 11:41:28.993181  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:41:29.010596  544774 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:41:29.010626  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:41:29.079687  544774 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:41:29.079712  544774 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:41:29.079725  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:41:29.112545  544774 logs.go:123] Gathering logs for container status ...
	I1206 11:41:29.112578  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:41:31.641077  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:41:31.651320  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:41:31.651428  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:41:31.678409  544774 cri.go:89] found id: ""
	I1206 11:41:31.678435  544774 logs.go:282] 0 containers: []
	W1206 11:41:31.678445  544774 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:41:31.678451  544774 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:41:31.678554  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:41:31.706027  544774 cri.go:89] found id: ""
	I1206 11:41:31.706060  544774 logs.go:282] 0 containers: []
	W1206 11:41:31.706069  544774 logs.go:284] No container was found matching "etcd"
	I1206 11:41:31.706075  544774 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:41:31.706142  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:41:31.744890  544774 cri.go:89] found id: ""
	I1206 11:41:31.744925  544774 logs.go:282] 0 containers: []
	W1206 11:41:31.744935  544774 logs.go:284] No container was found matching "coredns"
	I1206 11:41:31.744944  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:41:31.745019  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:41:31.772547  544774 cri.go:89] found id: ""
	I1206 11:41:31.772582  544774 logs.go:282] 0 containers: []
	W1206 11:41:31.772591  544774 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:41:31.772598  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:41:31.772669  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:41:31.802101  544774 cri.go:89] found id: ""
	I1206 11:41:31.802128  544774 logs.go:282] 0 containers: []
	W1206 11:41:31.802137  544774 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:41:31.802143  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:41:31.802205  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:41:31.833617  544774 cri.go:89] found id: ""
	I1206 11:41:31.833698  544774 logs.go:282] 0 containers: []
	W1206 11:41:31.833720  544774 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:41:31.833728  544774 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:41:31.833790  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:41:31.866032  544774 cri.go:89] found id: ""
	I1206 11:41:31.866067  544774 logs.go:282] 0 containers: []
	W1206 11:41:31.866075  544774 logs.go:284] No container was found matching "kindnet"
	I1206 11:41:31.866082  544774 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:41:31.866183  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:41:31.892358  544774 cri.go:89] found id: ""
	I1206 11:41:31.892382  544774 logs.go:282] 0 containers: []
	W1206 11:41:31.892392  544774 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:41:31.892402  544774 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:41:31.892414  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:41:31.964905  544774 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:41:31.964927  544774 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:41:31.964946  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:41:32.000524  544774 logs.go:123] Gathering logs for container status ...
	I1206 11:41:32.000572  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:41:32.049115  544774 logs.go:123] Gathering logs for kubelet ...
	I1206 11:41:32.049157  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:41:32.121291  544774 logs.go:123] Gathering logs for dmesg ...
	I1206 11:41:32.121327  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:41:34.639548  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:41:34.649906  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:41:34.649977  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:41:34.676446  544774 cri.go:89] found id: ""
	I1206 11:41:34.676479  544774 logs.go:282] 0 containers: []
	W1206 11:41:34.676489  544774 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:41:34.676495  544774 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:41:34.676559  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:41:34.702904  544774 cri.go:89] found id: ""
	I1206 11:41:34.702937  544774 logs.go:282] 0 containers: []
	W1206 11:41:34.702946  544774 logs.go:284] No container was found matching "etcd"
	I1206 11:41:34.702952  544774 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:41:34.703012  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:41:34.734879  544774 cri.go:89] found id: ""
	I1206 11:41:34.734905  544774 logs.go:282] 0 containers: []
	W1206 11:41:34.734914  544774 logs.go:284] No container was found matching "coredns"
	I1206 11:41:34.734921  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:41:34.734989  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:41:34.768896  544774 cri.go:89] found id: ""
	I1206 11:41:34.768921  544774 logs.go:282] 0 containers: []
	W1206 11:41:34.768930  544774 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:41:34.768937  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:41:34.768995  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:41:34.798610  544774 cri.go:89] found id: ""
	I1206 11:41:34.798635  544774 logs.go:282] 0 containers: []
	W1206 11:41:34.798644  544774 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:41:34.798651  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:41:34.798712  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:41:34.829169  544774 cri.go:89] found id: ""
	I1206 11:41:34.829194  544774 logs.go:282] 0 containers: []
	W1206 11:41:34.829203  544774 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:41:34.829210  544774 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:41:34.829297  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:41:34.855408  544774 cri.go:89] found id: ""
	I1206 11:41:34.855432  544774 logs.go:282] 0 containers: []
	W1206 11:41:34.855441  544774 logs.go:284] No container was found matching "kindnet"
	I1206 11:41:34.855447  544774 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:41:34.855538  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:41:34.882556  544774 cri.go:89] found id: ""
	I1206 11:41:34.882583  544774 logs.go:282] 0 containers: []
	W1206 11:41:34.882592  544774 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:41:34.882601  544774 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:41:34.882613  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:41:34.913925  544774 logs.go:123] Gathering logs for container status ...
	I1206 11:41:34.913957  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:41:34.948018  544774 logs.go:123] Gathering logs for kubelet ...
	I1206 11:41:34.948045  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:41:35.017923  544774 logs.go:123] Gathering logs for dmesg ...
	I1206 11:41:35.017968  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:41:35.035741  544774 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:41:35.035771  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:41:35.105920  544774 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:41:37.606202  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:41:37.616669  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:41:37.616742  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:41:37.642940  544774 cri.go:89] found id: ""
	I1206 11:41:37.642964  544774 logs.go:282] 0 containers: []
	W1206 11:41:37.642972  544774 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:41:37.642979  544774 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:41:37.643040  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:41:37.669501  544774 cri.go:89] found id: ""
	I1206 11:41:37.669526  544774 logs.go:282] 0 containers: []
	W1206 11:41:37.669536  544774 logs.go:284] No container was found matching "etcd"
	I1206 11:41:37.669542  544774 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:41:37.669604  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:41:37.700258  544774 cri.go:89] found id: ""
	I1206 11:41:37.700284  544774 logs.go:282] 0 containers: []
	W1206 11:41:37.700292  544774 logs.go:284] No container was found matching "coredns"
	I1206 11:41:37.700298  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:41:37.700357  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:41:37.736038  544774 cri.go:89] found id: ""
	I1206 11:41:37.736065  544774 logs.go:282] 0 containers: []
	W1206 11:41:37.736074  544774 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:41:37.736081  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:41:37.736141  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:41:37.771556  544774 cri.go:89] found id: ""
	I1206 11:41:37.771581  544774 logs.go:282] 0 containers: []
	W1206 11:41:37.771590  544774 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:41:37.771596  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:41:37.771660  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:41:37.800513  544774 cri.go:89] found id: ""
	I1206 11:41:37.800542  544774 logs.go:282] 0 containers: []
	W1206 11:41:37.800551  544774 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:41:37.800558  544774 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:41:37.800619  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:41:37.827039  544774 cri.go:89] found id: ""
	I1206 11:41:37.827072  544774 logs.go:282] 0 containers: []
	W1206 11:41:37.827081  544774 logs.go:284] No container was found matching "kindnet"
	I1206 11:41:37.827089  544774 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:41:37.827161  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:41:37.852169  544774 cri.go:89] found id: ""
	I1206 11:41:37.852248  544774 logs.go:282] 0 containers: []
	W1206 11:41:37.852272  544774 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:41:37.852289  544774 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:41:37.852314  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:41:37.882114  544774 logs.go:123] Gathering logs for container status ...
	I1206 11:41:37.882149  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:41:37.920689  544774 logs.go:123] Gathering logs for kubelet ...
	I1206 11:41:37.920716  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:41:37.994854  544774 logs.go:123] Gathering logs for dmesg ...
	I1206 11:41:37.994889  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:41:38.017461  544774 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:41:38.017502  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:41:38.088778  544774 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:41:40.589048  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:41:40.599691  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:41:40.599765  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:41:40.625913  544774 cri.go:89] found id: ""
	I1206 11:41:40.625940  544774 logs.go:282] 0 containers: []
	W1206 11:41:40.625949  544774 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:41:40.625956  544774 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:41:40.626017  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:41:40.653145  544774 cri.go:89] found id: ""
	I1206 11:41:40.653168  544774 logs.go:282] 0 containers: []
	W1206 11:41:40.653177  544774 logs.go:284] No container was found matching "etcd"
	I1206 11:41:40.653184  544774 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:41:40.653246  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:41:40.679947  544774 cri.go:89] found id: ""
	I1206 11:41:40.679972  544774 logs.go:282] 0 containers: []
	W1206 11:41:40.679982  544774 logs.go:284] No container was found matching "coredns"
	I1206 11:41:40.679988  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:41:40.680049  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:41:40.710820  544774 cri.go:89] found id: ""
	I1206 11:41:40.710846  544774 logs.go:282] 0 containers: []
	W1206 11:41:40.710855  544774 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:41:40.710861  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:41:40.710925  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:41:40.743906  544774 cri.go:89] found id: ""
	I1206 11:41:40.743932  544774 logs.go:282] 0 containers: []
	W1206 11:41:40.743941  544774 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:41:40.743948  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:41:40.744013  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:41:40.777386  544774 cri.go:89] found id: ""
	I1206 11:41:40.777412  544774 logs.go:282] 0 containers: []
	W1206 11:41:40.777421  544774 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:41:40.777428  544774 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:41:40.777491  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:41:40.811071  544774 cri.go:89] found id: ""
	I1206 11:41:40.811097  544774 logs.go:282] 0 containers: []
	W1206 11:41:40.811106  544774 logs.go:284] No container was found matching "kindnet"
	I1206 11:41:40.811114  544774 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:41:40.811180  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:41:40.837246  544774 cri.go:89] found id: ""
	I1206 11:41:40.837271  544774 logs.go:282] 0 containers: []
	W1206 11:41:40.837280  544774 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:41:40.837289  544774 logs.go:123] Gathering logs for container status ...
	I1206 11:41:40.837301  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:41:40.869411  544774 logs.go:123] Gathering logs for kubelet ...
	I1206 11:41:40.869442  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:41:40.935631  544774 logs.go:123] Gathering logs for dmesg ...
	I1206 11:41:40.935664  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:41:40.955159  544774 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:41:40.955189  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:41:41.024042  544774 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:41:41.024064  544774 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:41:41.024078  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:41:43.557237  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:41:43.568015  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:41:43.568087  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:41:43.593358  544774 cri.go:89] found id: ""
	I1206 11:41:43.593380  544774 logs.go:282] 0 containers: []
	W1206 11:41:43.593389  544774 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:41:43.593395  544774 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:41:43.593456  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:41:43.622551  544774 cri.go:89] found id: ""
	I1206 11:41:43.622623  544774 logs.go:282] 0 containers: []
	W1206 11:41:43.622637  544774 logs.go:284] No container was found matching "etcd"
	I1206 11:41:43.622643  544774 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:41:43.622703  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:41:43.652695  544774 cri.go:89] found id: ""
	I1206 11:41:43.652720  544774 logs.go:282] 0 containers: []
	W1206 11:41:43.652729  544774 logs.go:284] No container was found matching "coredns"
	I1206 11:41:43.652736  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:41:43.652797  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:41:43.682496  544774 cri.go:89] found id: ""
	I1206 11:41:43.682518  544774 logs.go:282] 0 containers: []
	W1206 11:41:43.682527  544774 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:41:43.682533  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:41:43.682592  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:41:43.710285  544774 cri.go:89] found id: ""
	I1206 11:41:43.710363  544774 logs.go:282] 0 containers: []
	W1206 11:41:43.710387  544774 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:41:43.710406  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:41:43.710487  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:41:43.757890  544774 cri.go:89] found id: ""
	I1206 11:41:43.757965  544774 logs.go:282] 0 containers: []
	W1206 11:41:43.757989  544774 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:41:43.758009  544774 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:41:43.758093  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:41:43.813864  544774 cri.go:89] found id: ""
	I1206 11:41:43.813886  544774 logs.go:282] 0 containers: []
	W1206 11:41:43.813894  544774 logs.go:284] No container was found matching "kindnet"
	I1206 11:41:43.813901  544774 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:41:43.813958  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:41:43.858324  544774 cri.go:89] found id: ""
	I1206 11:41:43.858353  544774 logs.go:282] 0 containers: []
	W1206 11:41:43.858363  544774 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:41:43.858372  544774 logs.go:123] Gathering logs for kubelet ...
	I1206 11:41:43.858386  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:41:43.928269  544774 logs.go:123] Gathering logs for dmesg ...
	I1206 11:41:43.928309  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:41:43.952179  544774 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:41:43.952214  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:41:44.037223  544774 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:41:44.037250  544774 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:41:44.037263  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:41:44.069608  544774 logs.go:123] Gathering logs for container status ...
	I1206 11:41:44.069643  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
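	Each "crictl ps -a --quiet --name=<component>" call above prints one container ID per line, so an empty result is what produces the paired `found id: ""` and `0 containers: []` lines. A small sketch of collecting that output into an ID list (the helper name is my own; only the crictl invocation is taken from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs returns the IDs crictl prints for a given --name
    // filter, one per line; an empty slice corresponds to the
    // "0 containers: []" lines in the log above.
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil // Fields also drops the trailing newline
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := listContainerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }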
	[nine further identical polling passes elided: 11:41:46, 11:41:49, 11:41:52, 11:41:55, 11:41:58, 11:42:01, 11:42:04, 11:42:07, and 11:42:10. Each pass finds no CRI containers for kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, or storage-provisioner, re-gathers the kubelet, dmesg, describe-nodes, CRI-O, and container-status logs, and each "kubectl describe nodes" attempt fails with "The connection to the server localhost:8443 was refused - did you specify the right host or port?"]
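	"The connection to the server localhost:8443 was refused" is kubectl reporting a plain TCP connection failure: nothing is listening on the apiserver port, which is consistent with crictl finding no kube-apiserver container at all. The same condition can be confirmed directly with a quick probe; this is an illustrative check, not part of the test suite.

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Same endpoint the generated kubeconfig points kubectl at.
        conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable:", err) // "connection refused" here
            return
        }
        conn.Close()
        fmt.Println("something is listening on 8443")
    }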
	I1206 11:42:13.806812  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:42:13.820222  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:42:13.820295  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:42:13.857541  544774 cri.go:89] found id: ""
	I1206 11:42:13.857567  544774 logs.go:282] 0 containers: []
	W1206 11:42:13.857576  544774 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:42:13.857582  544774 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:42:13.857644  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:42:13.902367  544774 cri.go:89] found id: ""
	I1206 11:42:13.902392  544774 logs.go:282] 0 containers: []
	W1206 11:42:13.902401  544774 logs.go:284] No container was found matching "etcd"
	I1206 11:42:13.902407  544774 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:42:13.902468  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:42:13.944085  544774 cri.go:89] found id: ""
	I1206 11:42:13.944109  544774 logs.go:282] 0 containers: []
	W1206 11:42:13.944118  544774 logs.go:284] No container was found matching "coredns"
	I1206 11:42:13.944124  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:42:13.944181  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:42:13.979409  544774 cri.go:89] found id: ""
	I1206 11:42:13.979436  544774 logs.go:282] 0 containers: []
	W1206 11:42:13.979445  544774 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:42:13.979452  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:42:13.979514  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:42:14.021359  544774 cri.go:89] found id: ""
	I1206 11:42:14.021388  544774 logs.go:282] 0 containers: []
	W1206 11:42:14.021398  544774 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:42:14.021404  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:42:14.021467  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:42:14.073662  544774 cri.go:89] found id: ""
	I1206 11:42:14.073688  544774 logs.go:282] 0 containers: []
	W1206 11:42:14.073697  544774 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:42:14.073707  544774 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:42:14.073767  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:42:14.107698  544774 cri.go:89] found id: ""
	I1206 11:42:14.107726  544774 logs.go:282] 0 containers: []
	W1206 11:42:14.107735  544774 logs.go:284] No container was found matching "kindnet"
	I1206 11:42:14.107741  544774 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:42:14.107803  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:42:14.144989  544774 cri.go:89] found id: ""
	I1206 11:42:14.145017  544774 logs.go:282] 0 containers: []
	W1206 11:42:14.145026  544774 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:42:14.145035  544774 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:42:14.145047  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:42:14.232606  544774 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:42:14.232627  544774 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:42:14.232641  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:42:14.276954  544774 logs.go:123] Gathering logs for container status ...
	I1206 11:42:14.276991  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:42:14.325154  544774 logs.go:123] Gathering logs for kubelet ...
	I1206 11:42:14.325180  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:42:14.409565  544774 logs.go:123] Gathering logs for dmesg ...
	I1206 11:42:14.409605  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
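	Note that the order of the "Gathering logs for ..." steps changes from pass to pass: container status comes first in the 11:41:40 pass, kubelet first at 11:41:43, and describe nodes first in the 11:42:13 pass above. That pattern is characteristic of ranging over a Go map, whose iteration order is deliberately randomized. A minimal sketch of the presumed structure; the map contents are inferred from the log lines, not copied from minikube's source.

    package main

    import "fmt"

    func main() {
        // One entry per "Gathering logs for X ..." line seen above.
        sources := map[string]string{
            "kubelet":          "journalctl -u kubelet -n 400",
            "dmesg":            "dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "describe nodes":   "kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
            "CRI-O":            "journalctl -u crio -n 400",
            "container status": "crictl ps -a || docker ps -a",
        }
        // Ranging over a map visits keys in a different order on each run,
        // which would explain why the gather order differs between passes.
        for name, cmd := range sources {
            fmt.Printf("Gathering logs for %s ... (%s)\n", name, cmd)
        }
    }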
	[identical poll cycles from 11:42:16 through 11:42:47 elided: roughly every 3 seconds the same sequence repeats, running sudo pgrep -xnf kube-apiserver.*minikube.*, listing CRI containers for kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, and storage-provisioner (each returns no containers), and gathering kubelet, dmesg, describe nodes, CRI-O, and container status logs; every kubectl describe nodes attempt exits with status 1 and "The connection to the server localhost:8443 was refused - did you specify the right host or port?"]
	I1206 11:42:49.639253  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:42:49.649778  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:42:49.649853  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:42:49.676871  544774 cri.go:89] found id: ""
	I1206 11:42:49.676896  544774 logs.go:282] 0 containers: []
	W1206 11:42:49.676905  544774 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:42:49.676912  544774 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:42:49.676975  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:42:49.704002  544774 cri.go:89] found id: ""
	I1206 11:42:49.704027  544774 logs.go:282] 0 containers: []
	W1206 11:42:49.704036  544774 logs.go:284] No container was found matching "etcd"
	I1206 11:42:49.704042  544774 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:42:49.704101  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:42:49.750735  544774 cri.go:89] found id: ""
	I1206 11:42:49.750760  544774 logs.go:282] 0 containers: []
	W1206 11:42:49.750769  544774 logs.go:284] No container was found matching "coredns"
	I1206 11:42:49.750776  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:42:49.750849  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:42:49.788163  544774 cri.go:89] found id: ""
	I1206 11:42:49.788192  544774 logs.go:282] 0 containers: []
	W1206 11:42:49.788202  544774 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:42:49.788209  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:42:49.788270  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:42:49.820857  544774 cri.go:89] found id: ""
	I1206 11:42:49.820886  544774 logs.go:282] 0 containers: []
	W1206 11:42:49.820895  544774 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:42:49.820901  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:42:49.820960  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:42:49.846190  544774 cri.go:89] found id: ""
	I1206 11:42:49.846218  544774 logs.go:282] 0 containers: []
	W1206 11:42:49.846228  544774 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:42:49.846235  544774 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:42:49.846297  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:42:49.871264  544774 cri.go:89] found id: ""
	I1206 11:42:49.871291  544774 logs.go:282] 0 containers: []
	W1206 11:42:49.871301  544774 logs.go:284] No container was found matching "kindnet"
	I1206 11:42:49.871307  544774 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:42:49.871369  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:42:49.897371  544774 cri.go:89] found id: ""
	I1206 11:42:49.897399  544774 logs.go:282] 0 containers: []
	W1206 11:42:49.897419  544774 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:42:49.897429  544774 logs.go:123] Gathering logs for dmesg ...
	I1206 11:42:49.897442  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:42:49.914717  544774 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:42:49.914743  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:42:49.986123  544774 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:42:49.986144  544774 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:42:49.986158  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:42:50.018819  544774 logs.go:123] Gathering logs for container status ...
	I1206 11:42:50.018870  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:42:50.055145  544774 logs.go:123] Gathering logs for kubelet ...
	I1206 11:42:50.055178  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:42:52.629820  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:42:52.640135  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:42:52.640209  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:42:52.665325  544774 cri.go:89] found id: ""
	I1206 11:42:52.665350  544774 logs.go:282] 0 containers: []
	W1206 11:42:52.665359  544774 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:42:52.665365  544774 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:42:52.665424  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:42:52.690428  544774 cri.go:89] found id: ""
	I1206 11:42:52.690456  544774 logs.go:282] 0 containers: []
	W1206 11:42:52.690465  544774 logs.go:284] No container was found matching "etcd"
	I1206 11:42:52.690470  544774 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:42:52.690534  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:42:52.716311  544774 cri.go:89] found id: ""
	I1206 11:42:52.716338  544774 logs.go:282] 0 containers: []
	W1206 11:42:52.716347  544774 logs.go:284] No container was found matching "coredns"
	I1206 11:42:52.716353  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:42:52.716412  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:42:52.753629  544774 cri.go:89] found id: ""
	I1206 11:42:52.753657  544774 logs.go:282] 0 containers: []
	W1206 11:42:52.753666  544774 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:42:52.753673  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:42:52.753732  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:42:52.784806  544774 cri.go:89] found id: ""
	I1206 11:42:52.784828  544774 logs.go:282] 0 containers: []
	W1206 11:42:52.784837  544774 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:42:52.784847  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:42:52.784909  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:42:52.811318  544774 cri.go:89] found id: ""
	I1206 11:42:52.811343  544774 logs.go:282] 0 containers: []
	W1206 11:42:52.811352  544774 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:42:52.811358  544774 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:42:52.811468  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:42:52.836950  544774 cri.go:89] found id: ""
	I1206 11:42:52.836977  544774 logs.go:282] 0 containers: []
	W1206 11:42:52.836986  544774 logs.go:284] No container was found matching "kindnet"
	I1206 11:42:52.836992  544774 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:42:52.837056  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:42:52.861313  544774 cri.go:89] found id: ""
	I1206 11:42:52.861389  544774 logs.go:282] 0 containers: []
	W1206 11:42:52.861404  544774 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:42:52.861415  544774 logs.go:123] Gathering logs for container status ...
	I1206 11:42:52.861426  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:42:52.890567  544774 logs.go:123] Gathering logs for kubelet ...
	I1206 11:42:52.890596  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:42:52.958128  544774 logs.go:123] Gathering logs for dmesg ...
	I1206 11:42:52.958172  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:42:52.974852  544774 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:42:52.974883  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:42:53.044960  544774 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:42:53.044983  544774 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:42:53.044997  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:42:55.577622  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:42:55.588484  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:42:55.588557  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:42:55.624367  544774 cri.go:89] found id: ""
	I1206 11:42:55.624389  544774 logs.go:282] 0 containers: []
	W1206 11:42:55.624397  544774 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:42:55.624403  544774 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:42:55.624465  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:42:55.650204  544774 cri.go:89] found id: ""
	I1206 11:42:55.650227  544774 logs.go:282] 0 containers: []
	W1206 11:42:55.650236  544774 logs.go:284] No container was found matching "etcd"
	I1206 11:42:55.650242  544774 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:42:55.650302  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:42:55.675770  544774 cri.go:89] found id: ""
	I1206 11:42:55.675791  544774 logs.go:282] 0 containers: []
	W1206 11:42:55.675799  544774 logs.go:284] No container was found matching "coredns"
	I1206 11:42:55.675806  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:42:55.675864  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:42:55.703753  544774 cri.go:89] found id: ""
	I1206 11:42:55.703779  544774 logs.go:282] 0 containers: []
	W1206 11:42:55.703788  544774 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:42:55.703795  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:42:55.703857  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:42:55.734165  544774 cri.go:89] found id: ""
	I1206 11:42:55.734192  544774 logs.go:282] 0 containers: []
	W1206 11:42:55.734201  544774 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:42:55.734208  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:42:55.734330  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:42:55.776864  544774 cri.go:89] found id: ""
	I1206 11:42:55.776893  544774 logs.go:282] 0 containers: []
	W1206 11:42:55.776902  544774 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:42:55.776909  544774 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:42:55.776967  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:42:55.805092  544774 cri.go:89] found id: ""
	I1206 11:42:55.805115  544774 logs.go:282] 0 containers: []
	W1206 11:42:55.805123  544774 logs.go:284] No container was found matching "kindnet"
	I1206 11:42:55.805130  544774 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:42:55.805191  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:42:55.831544  544774 cri.go:89] found id: ""
	I1206 11:42:55.831641  544774 logs.go:282] 0 containers: []
	W1206 11:42:55.831669  544774 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:42:55.831696  544774 logs.go:123] Gathering logs for kubelet ...
	I1206 11:42:55.831721  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:42:55.903165  544774 logs.go:123] Gathering logs for dmesg ...
	I1206 11:42:55.903200  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:42:55.919990  544774 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:42:55.920019  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:42:55.990582  544774 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:42:55.990605  544774 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:42:55.990618  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:42:56.024756  544774 logs.go:123] Gathering logs for container status ...
	I1206 11:42:56.024791  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:42:58.557459  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:42:58.569002  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:42:58.569066  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:42:58.616571  544774 cri.go:89] found id: ""
	I1206 11:42:58.616595  544774 logs.go:282] 0 containers: []
	W1206 11:42:58.616603  544774 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:42:58.616609  544774 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:42:58.616665  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:42:58.662382  544774 cri.go:89] found id: ""
	I1206 11:42:58.662405  544774 logs.go:282] 0 containers: []
	W1206 11:42:58.662414  544774 logs.go:284] No container was found matching "etcd"
	I1206 11:42:58.662419  544774 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:42:58.662489  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:42:58.698166  544774 cri.go:89] found id: ""
	I1206 11:42:58.698188  544774 logs.go:282] 0 containers: []
	W1206 11:42:58.698196  544774 logs.go:284] No container was found matching "coredns"
	I1206 11:42:58.698202  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:42:58.698262  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:42:58.747467  544774 cri.go:89] found id: ""
	I1206 11:42:58.747488  544774 logs.go:282] 0 containers: []
	W1206 11:42:58.747503  544774 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:42:58.747510  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:42:58.747568  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:42:58.784825  544774 cri.go:89] found id: ""
	I1206 11:42:58.784851  544774 logs.go:282] 0 containers: []
	W1206 11:42:58.784860  544774 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:42:58.784866  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:42:58.784928  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:42:58.840511  544774 cri.go:89] found id: ""
	I1206 11:42:58.840538  544774 logs.go:282] 0 containers: []
	W1206 11:42:58.840547  544774 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:42:58.840553  544774 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:42:58.840613  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:42:58.881379  544774 cri.go:89] found id: ""
	I1206 11:42:58.881406  544774 logs.go:282] 0 containers: []
	W1206 11:42:58.881414  544774 logs.go:284] No container was found matching "kindnet"
	I1206 11:42:58.881421  544774 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:42:58.881479  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:42:58.909555  544774 cri.go:89] found id: ""
	I1206 11:42:58.909581  544774 logs.go:282] 0 containers: []
	W1206 11:42:58.909591  544774 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:42:58.909599  544774 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:42:58.909611  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:42:58.945127  544774 logs.go:123] Gathering logs for container status ...
	I1206 11:42:58.945164  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:42:58.974077  544774 logs.go:123] Gathering logs for kubelet ...
	I1206 11:42:58.974103  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:42:59.045701  544774 logs.go:123] Gathering logs for dmesg ...
	I1206 11:42:59.045741  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:42:59.061959  544774 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:42:59.061988  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:42:59.129836  544774 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
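Every "describe nodes" attempt in these cycles fails identically: kubectl targets localhost:8443, but no kube-apiserver container ever started, so the connection is refused. A quick illustrative probe (the port is taken from the log; everything else is an assumption) confirms that state from code:

// Illustrative connectivity probe: checks whether anything accepts TCP
// connections on the apiserver port the log complains about.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// Matches the kubectl error above: connection refused means the
		// apiserver static pod was never brought up by the kubelet.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}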
	I1206 11:43:01.631507  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:43:01.642332  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:43:01.642403  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:43:01.674663  544774 cri.go:89] found id: ""
	I1206 11:43:01.674740  544774 logs.go:282] 0 containers: []
	W1206 11:43:01.674786  544774 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:43:01.674814  544774 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:43:01.674907  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:43:01.703181  544774 cri.go:89] found id: ""
	I1206 11:43:01.703208  544774 logs.go:282] 0 containers: []
	W1206 11:43:01.703217  544774 logs.go:284] No container was found matching "etcd"
	I1206 11:43:01.703223  544774 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:43:01.703280  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:43:01.738595  544774 cri.go:89] found id: ""
	I1206 11:43:01.738617  544774 logs.go:282] 0 containers: []
	W1206 11:43:01.738625  544774 logs.go:284] No container was found matching "coredns"
	I1206 11:43:01.738631  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:43:01.738688  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:43:01.789465  544774 cri.go:89] found id: ""
	I1206 11:43:01.789486  544774 logs.go:282] 0 containers: []
	W1206 11:43:01.789495  544774 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:43:01.789501  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:43:01.789564  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:43:01.821186  544774 cri.go:89] found id: ""
	I1206 11:43:01.821207  544774 logs.go:282] 0 containers: []
	W1206 11:43:01.821215  544774 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:43:01.821221  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:43:01.821278  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:43:01.855644  544774 cri.go:89] found id: ""
	I1206 11:43:01.855716  544774 logs.go:282] 0 containers: []
	W1206 11:43:01.855738  544774 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:43:01.855759  544774 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:43:01.855839  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:43:01.890944  544774 cri.go:89] found id: ""
	I1206 11:43:01.891017  544774 logs.go:282] 0 containers: []
	W1206 11:43:01.891041  544774 logs.go:284] No container was found matching "kindnet"
	I1206 11:43:01.891059  544774 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:43:01.891140  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:43:01.921737  544774 cri.go:89] found id: ""
	I1206 11:43:01.921808  544774 logs.go:282] 0 containers: []
	W1206 11:43:01.921831  544774 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:43:01.921853  544774 logs.go:123] Gathering logs for kubelet ...
	I1206 11:43:01.921885  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:43:02.021771  544774 logs.go:123] Gathering logs for dmesg ...
	I1206 11:43:02.021816  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:43:02.057185  544774 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:43:02.057209  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:43:02.155861  544774 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:43:02.155929  544774 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:43:02.155956  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:43:02.203026  544774 logs.go:123] Gathering logs for container status ...
	I1206 11:43:02.203097  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:43:04.739538  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:43:04.750098  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:43:04.750171  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:43:04.777118  544774 cri.go:89] found id: ""
	I1206 11:43:04.777140  544774 logs.go:282] 0 containers: []
	W1206 11:43:04.777148  544774 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:43:04.777154  544774 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:43:04.777214  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:43:04.803114  544774 cri.go:89] found id: ""
	I1206 11:43:04.803137  544774 logs.go:282] 0 containers: []
	W1206 11:43:04.803146  544774 logs.go:284] No container was found matching "etcd"
	I1206 11:43:04.803152  544774 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:43:04.803211  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:43:04.829582  544774 cri.go:89] found id: ""
	I1206 11:43:04.829611  544774 logs.go:282] 0 containers: []
	W1206 11:43:04.829621  544774 logs.go:284] No container was found matching "coredns"
	I1206 11:43:04.829627  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:43:04.829690  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:43:04.863296  544774 cri.go:89] found id: ""
	I1206 11:43:04.863348  544774 logs.go:282] 0 containers: []
	W1206 11:43:04.863357  544774 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:43:04.863363  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:43:04.863506  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:43:04.896561  544774 cri.go:89] found id: ""
	I1206 11:43:04.896588  544774 logs.go:282] 0 containers: []
	W1206 11:43:04.896597  544774 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:43:04.896603  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:43:04.896668  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:43:04.927175  544774 cri.go:89] found id: ""
	I1206 11:43:04.927203  544774 logs.go:282] 0 containers: []
	W1206 11:43:04.927212  544774 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:43:04.927218  544774 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:43:04.927276  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:43:04.979857  544774 cri.go:89] found id: ""
	I1206 11:43:04.979883  544774 logs.go:282] 0 containers: []
	W1206 11:43:04.979892  544774 logs.go:284] No container was found matching "kindnet"
	I1206 11:43:04.979898  544774 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:43:04.979959  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:43:05.031178  544774 cri.go:89] found id: ""
	I1206 11:43:05.031205  544774 logs.go:282] 0 containers: []
	W1206 11:43:05.031215  544774 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:43:05.031223  544774 logs.go:123] Gathering logs for kubelet ...
	I1206 11:43:05.031234  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:43:05.131678  544774 logs.go:123] Gathering logs for dmesg ...
	I1206 11:43:05.131713  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:43:05.148503  544774 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:43:05.148535  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:43:05.235807  544774 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:43:05.235829  544774 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:43:05.235842  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:43:05.270759  544774 logs.go:123] Gathering logs for container status ...
	I1206 11:43:05.270800  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1206 11:43:07.805140  544774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:43:07.815612  544774 kubeadm.go:602] duration metric: took 4m5.208117157s to restartPrimaryControlPlane
	W1206 11:43:07.815683  544774 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1206 11:43:07.815744  544774 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1206 11:43:08.224195  544774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 11:43:08.237638  544774 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 11:43:08.246079  544774 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 11:43:08.246148  544774 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 11:43:08.254613  544774 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 11:43:08.254642  544774 kubeadm.go:158] found existing configuration files:
	
	I1206 11:43:08.254699  544774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 11:43:08.262915  544774 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 11:43:08.263010  544774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 11:43:08.270780  544774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 11:43:08.278817  544774 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 11:43:08.278883  544774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 11:43:08.286519  544774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 11:43:08.294562  544774 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 11:43:08.294639  544774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 11:43:08.302478  544774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 11:43:08.310947  544774 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 11:43:08.311023  544774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
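The grep/rm sequence above is the stale-kubeconfig sweep: each file is kept only if it already references https://control-plane.minikube.internal:8443 and is removed otherwise. Here every grep exits with status 2 because the files are already gone after the kubeadm reset, so each rm -f is a no-op. A hedged sketch of that check-then-remove logic (the helper name is made up; the endpoint and file list come from the log):

// Sketch of the grep-then-remove sweep seen in the log; not minikube's code.
package main

import (
	"fmt"
	"os"
	"strings"
)

const endpoint = "https://control-plane.minikube.internal:8443"

// sweepStaleConfig removes path unless it already references endpoint.
func sweepStaleConfig(path string) {
	data, err := os.ReadFile(path)
	if err != nil || !strings.Contains(string(data), endpoint) {
		// grep exit status 2 above means the file is missing, so the
		// removal is a no-op, exactly as in the log.
		fmt.Printf("%s: endpoint missing or file unreadable, removing\n", path)
		os.Remove(path)
		return
	}
	fmt.Printf("%s: endpoint present, keeping\n", path)
}

func main() {
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		sweepStaleConfig(f)
	}
}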
	I1206 11:43:08.319782  544774 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 11:43:08.450371  544774 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 11:43:08.450860  544774 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 11:43:08.537060  544774 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
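Of the three preflight warnings, the cgroup one is the most plausible culprit for what follows: per the warning text itself, kubelet v1.35 or newer requires the kubelet configuration option 'FailCgroupV1' to be set to 'false' on cgroup v1 hosts, and this node (kernel 5.15.0-1084-aws) is on cgroup v1, which is consistent with the kubelet never turning healthy below. Detecting the host's cgroup mode is simple; the probe below is an illustrative sketch using the standard unified-hierarchy marker file:

// Illustrative cgroup-mode probe: on a unified (v2) hierarchy the file
// /sys/fs/cgroup/cgroup.controllers exists; on a legacy (v1) host it does not.
package main

import (
	"fmt"
	"os"
)

func main() {
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		fmt.Println("cgroup v2 (unified hierarchy)")
	} else {
		// The state this node is in, per the kubeadm warning above.
		fmt.Println("cgroup v1 (legacy hierarchy)")
	}
}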
	I1206 11:47:10.302823  544774 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 11:47:10.302865  544774 kubeadm.go:319] 
	I1206 11:47:10.302959  544774 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 11:47:10.307830  544774 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 11:47:10.307890  544774 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 11:47:10.307983  544774 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 11:47:10.308043  544774 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 11:47:10.308084  544774 kubeadm.go:319] OS: Linux
	I1206 11:47:10.308132  544774 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 11:47:10.308185  544774 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 11:47:10.308235  544774 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 11:47:10.308287  544774 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 11:47:10.308340  544774 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 11:47:10.308392  544774 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 11:47:10.308442  544774 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 11:47:10.308494  544774 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 11:47:10.308545  544774 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 11:47:10.308622  544774 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 11:47:10.308721  544774 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 11:47:10.308819  544774 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 11:47:10.308885  544774 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 11:47:10.311989  544774 out.go:252]   - Generating certificates and keys ...
	I1206 11:47:10.312084  544774 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 11:47:10.312153  544774 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 11:47:10.312243  544774 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 11:47:10.312308  544774 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 11:47:10.312379  544774 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 11:47:10.312437  544774 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 11:47:10.312500  544774 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 11:47:10.312562  544774 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 11:47:10.312636  544774 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 11:47:10.312707  544774 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 11:47:10.312748  544774 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 11:47:10.312806  544774 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 11:47:10.312858  544774 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 11:47:10.312915  544774 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 11:47:10.312969  544774 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 11:47:10.313033  544774 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 11:47:10.313089  544774 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 11:47:10.313173  544774 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 11:47:10.313239  544774 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 11:47:10.316166  544774 out.go:252]   - Booting up control plane ...
	I1206 11:47:10.316290  544774 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 11:47:10.316438  544774 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 11:47:10.316532  544774 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 11:47:10.316677  544774 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 11:47:10.316775  544774 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 11:47:10.316876  544774 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 11:47:10.316991  544774 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 11:47:10.317034  544774 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 11:47:10.317179  544774 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 11:47:10.317295  544774 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 11:47:10.317371  544774 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000236884s
	I1206 11:47:10.317381  544774 kubeadm.go:319] 
	I1206 11:47:10.317438  544774 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 11:47:10.317474  544774 kubeadm.go:319] 	- The kubelet is not running
	I1206 11:47:10.317581  544774 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 11:47:10.317590  544774 kubeadm.go:319] 
	I1206 11:47:10.317694  544774 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 11:47:10.317729  544774 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 11:47:10.317766  544774 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	W1206 11:47:10.317910  544774 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000236884s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
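The fatal phase is wait-control-plane: kubeadm polls the kubelet's local healthz endpoint for up to 4m0s and aborts when it never answers, which is exactly the timeout reported above. The URL and time budget below are taken from the log; the code is only a sketch of that wait, not kubeadm's implementation:

// Sketch of the kubelet health wait the log describes: poll
// http://127.0.0.1:10248/healthz until it returns 200 or the 4m budget ends.
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	client := &http.Client{Timeout: 2 * time.Second}
	for {
		resp, err := client.Get("http://127.0.0.1:10248/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("kubelet healthy")
				return
			}
		}
		select {
		case <-ctx.Done():
			// The state the log reports: "The kubelet is not healthy
			// after 4m0.000236884s".
			fmt.Println("kubelet never became healthy:", ctx.Err())
			return
		case <-time.After(2 * time.Second):
		}
	}
}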
	
	I1206 11:47:10.317996  544774 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1206 11:47:10.318162  544774 kubeadm.go:319] 
	I1206 11:47:10.740250  544774 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 11:47:10.763968  544774 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 11:47:10.764035  544774 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 11:47:10.776744  544774 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 11:47:10.776764  544774 kubeadm.go:158] found existing configuration files:
	
	I1206 11:47:10.776823  544774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 11:47:10.787703  544774 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 11:47:10.787771  544774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 11:47:10.799262  544774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 11:47:10.808778  544774 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 11:47:10.808851  544774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 11:47:10.817363  544774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 11:47:10.829945  544774 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 11:47:10.830017  544774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 11:47:10.839168  544774 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 11:47:10.852189  544774 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 11:47:10.852256  544774 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 11:47:10.860906  544774 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1206 11:47:10.911771  544774 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1206 11:47:10.912200  544774 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 11:47:11.018740  544774 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 11:47:11.018883  544774 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 11:47:11.018942  544774 kubeadm.go:319] OS: Linux
	I1206 11:47:11.019016  544774 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 11:47:11.019091  544774 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 11:47:11.019168  544774 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 11:47:11.019247  544774 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 11:47:11.019326  544774 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 11:47:11.019420  544774 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 11:47:11.019523  544774 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 11:47:11.019636  544774 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 11:47:11.019728  544774 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 11:47:11.130028  544774 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 11:47:11.130146  544774 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 11:47:11.130243  544774 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 11:47:11.142338  544774 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 11:47:11.145827  544774 out.go:252]   - Generating certificates and keys ...
	I1206 11:47:11.145966  544774 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 11:47:11.146041  544774 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 11:47:11.146129  544774 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1206 11:47:11.146219  544774 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1206 11:47:11.146323  544774 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1206 11:47:11.146389  544774 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1206 11:47:11.146475  544774 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1206 11:47:11.146557  544774 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1206 11:47:11.146636  544774 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1206 11:47:11.146718  544774 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1206 11:47:11.146790  544774 kubeadm.go:319] [certs] Using the existing "sa" key
	I1206 11:47:11.146916  544774 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 11:47:11.285016  544774 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 11:47:11.485749  544774 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 11:47:12.224415  544774 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 11:47:12.395933  544774 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 11:47:12.514467  544774 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 11:47:12.515524  544774 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 11:47:12.518501  544774 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 11:47:12.521727  544774 out.go:252]   - Booting up control plane ...
	I1206 11:47:12.521859  544774 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 11:47:12.521948  544774 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 11:47:12.523060  544774 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 11:47:12.541505  544774 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 11:47:12.541619  544774 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 11:47:12.554764  544774 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 11:47:12.555041  544774 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 11:47:12.555246  544774 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 11:47:12.723212  544774 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 11:47:12.723342  544774 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 11:51:12.724120  544774 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00132153s
	I1206 11:51:12.724177  544774 kubeadm.go:319] 
	I1206 11:51:12.724243  544774 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 11:51:12.724283  544774 kubeadm.go:319] 	- The kubelet is not running
	I1206 11:51:12.724394  544774 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 11:51:12.724403  544774 kubeadm.go:319] 
	I1206 11:51:12.724515  544774 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 11:51:12.724550  544774 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 11:51:12.724585  544774 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 11:51:12.724592  544774 kubeadm.go:319] 
	I1206 11:51:12.728928  544774 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 11:51:12.729358  544774 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 11:51:12.729471  544774 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 11:51:12.729710  544774 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 11:51:12.729720  544774 kubeadm.go:319] 
	I1206 11:51:12.729788  544774 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1206 11:51:12.729844  544774 kubeadm.go:403] duration metric: took 12m10.170877222s to StartCluster
	I1206 11:51:12.729883  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:51:12.729947  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:51:12.772973  544774 cri.go:89] found id: ""
	I1206 11:51:12.773000  544774 logs.go:282] 0 containers: []
	W1206 11:51:12.773009  544774 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:51:12.773015  544774 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:51:12.773076  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:51:12.827021  544774 cri.go:89] found id: ""
	I1206 11:51:12.827049  544774 logs.go:282] 0 containers: []
	W1206 11:51:12.827058  544774 logs.go:284] No container was found matching "etcd"
	I1206 11:51:12.827065  544774 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:51:12.827132  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:51:12.878403  544774 cri.go:89] found id: ""
	I1206 11:51:12.878426  544774 logs.go:282] 0 containers: []
	W1206 11:51:12.878433  544774 logs.go:284] No container was found matching "coredns"
	I1206 11:51:12.878439  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:51:12.878502  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:51:12.933678  544774 cri.go:89] found id: ""
	I1206 11:51:12.933703  544774 logs.go:282] 0 containers: []
	W1206 11:51:12.933713  544774 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:51:12.933720  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:51:12.933781  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:51:13.002629  544774 cri.go:89] found id: ""
	I1206 11:51:13.002655  544774 logs.go:282] 0 containers: []
	W1206 11:51:13.002664  544774 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:51:13.002671  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:51:13.002739  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:51:13.039549  544774 cri.go:89] found id: ""
	I1206 11:51:13.039634  544774 logs.go:282] 0 containers: []
	W1206 11:51:13.039659  544774 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:51:13.039677  544774 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:51:13.039789  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:51:13.091734  544774 cri.go:89] found id: ""
	I1206 11:51:13.091815  544774 logs.go:282] 0 containers: []
	W1206 11:51:13.091838  544774 logs.go:284] No container was found matching "kindnet"
	I1206 11:51:13.091859  544774 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:51:13.091976  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:51:13.137652  544774 cri.go:89] found id: ""
	I1206 11:51:13.137731  544774 logs.go:282] 0 containers: []
	W1206 11:51:13.137761  544774 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:51:13.137787  544774 logs.go:123] Gathering logs for kubelet ...
	I1206 11:51:13.137828  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:51:13.221234  544774 logs.go:123] Gathering logs for dmesg ...
	I1206 11:51:13.221326  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:51:13.252676  544774 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:51:13.252759  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:51:13.364828  544774 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:51:13.364851  544774 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:51:13.364864  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:51:13.399130  544774 logs.go:123] Gathering logs for container status ...
	I1206 11:51:13.399163  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1206 11:51:13.429142  544774 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00132153s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1206 11:51:13.429196  544774 out.go:285] * 
	W1206 11:51:13.429249  544774 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00132153s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 11:51:13.429266  544774 out.go:285] * 
	W1206 11:51:13.431476  544774 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 11:51:13.438425  544774 out.go:203] 
	W1206 11:51:13.441301  544774 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00132153s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 11:51:13.441365  544774 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 11:51:13.441387  544774 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1206 11:51:13.444568  544774 out.go:203] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-linux-arm64 start -p kubernetes-upgrade-432995 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 109
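The kubelet health check that times out in the capture above is a plain HTTP probe of the kubelet's healthz endpoint. A minimal way to reproduce it by hand, assuming `minikube ssh -p kubernetes-upgrade-432995` still gives a shell on the node, is sketched below; the commands are the ones the kubeadm output itself suggests, not a guaranteed diagnosis:

	# Sketch only: re-run kubeadm's kubelet probe and collect the hints its output suggests.
	curl -sSL http://127.0.0.1:10248/healthz     # kubeadm waits up to 4m0s for this to answer
	systemctl status kubelet                     # is the unit running at all?
	journalctl -xeu kubelet | tail -n 50         # most recent kubelet errors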
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-432995 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-432995 version --output=json: exit status 1 (153.798628ms)

-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "33",
	    "gitVersion": "v1.33.2",
	    "gitCommit": "a57b6f7709f6c2722b92f07b8b4c48210a51fc40",
	    "gitTreeState": "clean",
	    "buildDate": "2025-06-17T18:41:31Z",
	    "goVersion": "go1.24.4",
	    "compiler": "gc",
	    "platform": "linux/arm64"
	  },
	  "kustomizeVersion": "v5.6.0"
	}

-- /stdout --
** stderr ** 
	The connection to the server 192.168.76.2:8443 was refused - did you specify the right host or port?

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
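The connection-refused error above is downstream of the same failure: the kubelet never became healthy, so no apiserver static pod was started and nothing listens on 8443. The log's own suggestion is to retry with an explicit cgroup driver; a sketch of that retry follows, with every flag value taken verbatim from this run's start command and from minikube's suggestion (it is not a verified fix for this cgroup v1 host):

	# Sketch of the suggested workaround, not a confirmed fix for this failure.
	out/minikube-linux-arm64 start -p kubernetes-upgrade-432995 \
	  --memory=3072 --kubernetes-version=v1.35.0-beta.0 \
	  --driver=docker --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd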
panic.go:615: *** TestKubernetesUpgrade FAILED at 2025-12-06 11:51:14.081871679 +0000 UTC m=+5132.560858731
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect kubernetes-upgrade-432995
helpers_test.go:243: (dbg) docker inspect kubernetes-upgrade-432995:

-- stdout --
	[
	    {
	        "Id": "c67cc4c30dd783f173c3b13275194a85413786f72c3c2b814e510083790be4b0",
	        "Created": "2025-12-06T11:38:11.522970364Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 544905,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T11:38:48.187100376Z",
	            "FinishedAt": "2025-12-06T11:38:47.124448026Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/c67cc4c30dd783f173c3b13275194a85413786f72c3c2b814e510083790be4b0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c67cc4c30dd783f173c3b13275194a85413786f72c3c2b814e510083790be4b0/hostname",
	        "HostsPath": "/var/lib/docker/containers/c67cc4c30dd783f173c3b13275194a85413786f72c3c2b814e510083790be4b0/hosts",
	        "LogPath": "/var/lib/docker/containers/c67cc4c30dd783f173c3b13275194a85413786f72c3c2b814e510083790be4b0/c67cc4c30dd783f173c3b13275194a85413786f72c3c2b814e510083790be4b0-json.log",
	        "Name": "/kubernetes-upgrade-432995",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-432995:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-432995",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c67cc4c30dd783f173c3b13275194a85413786f72c3c2b814e510083790be4b0",
	                "LowerDir": "/var/lib/docker/overlay2/dd3d1eb6a6b609e3f582c956e191bdcb29f36434f7eac21232b10d7c2763a2a6-init/diff:/var/lib/docker/overlay2/5011226d55616c9977b14c1fe617d1302fe59373df05ce8ec6e21b79143a1c57/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dd3d1eb6a6b609e3f582c956e191bdcb29f36434f7eac21232b10d7c2763a2a6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dd3d1eb6a6b609e3f582c956e191bdcb29f36434f7eac21232b10d7c2763a2a6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dd3d1eb6a6b609e3f582c956e191bdcb29f36434f7eac21232b10d7c2763a2a6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-432995",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-432995/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-432995",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-432995",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-432995",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7c1a077cc317703f74d368b6059d4304e84885a007839e550b6cd03fb2597c72",
	            "SandboxKey": "/var/run/docker/netns/7c1a077cc317",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33388"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33389"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33392"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33390"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33391"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-432995": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "06:f1:d7:51:66:4c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e0d0971abecf37bf83995937337b8a8e024703a48001521b40af0bedd8f47380",
	                    "EndpointID": "ddaa3db20b4826a65b87e9cb1b275a5f929238b57712f88b760dd601b903a2f5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-432995",
	                        "c67cc4c30dd7"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
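One detail worth noting in the inspect output: the container is still Running and publishes the apiserver port 8443 on 127.0.0.1:33391, so the refused connections point at the apiserver process, not the container. A hypothetical spot-check (the host port is copied from the Ports map above and is only valid while this container exists):

	# Connection refused here would confirm the apiserver never came up inside the node.
	curl -k https://127.0.0.1:33391/healthz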
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-432995 -n kubernetes-upgrade-432995
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-432995 -n kubernetes-upgrade-432995: exit status 2 (476.60336ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-432995 logs -n 25
helpers_test.go:260: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-365903 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-365903       │ jenkins │ v1.37.0 │ 06 Dec 25 11:37 UTC │ 06 Dec 25 11:37 UTC │
	│ start   │ -p missing-upgrade-887720 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ missing-upgrade-887720    │ jenkins │ v1.37.0 │ 06 Dec 25 11:37 UTC │ 06 Dec 25 11:38 UTC │
	│ delete  │ -p NoKubernetes-365903                                                                                                                          │ NoKubernetes-365903       │ jenkins │ v1.37.0 │ 06 Dec 25 11:37 UTC │ 06 Dec 25 11:37 UTC │
	│ start   │ -p NoKubernetes-365903 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-365903       │ jenkins │ v1.37.0 │ 06 Dec 25 11:37 UTC │ 06 Dec 25 11:37 UTC │
	│ ssh     │ -p NoKubernetes-365903 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-365903       │ jenkins │ v1.37.0 │ 06 Dec 25 11:37 UTC │                     │
	│ stop    │ -p NoKubernetes-365903                                                                                                                          │ NoKubernetes-365903       │ jenkins │ v1.37.0 │ 06 Dec 25 11:37 UTC │ 06 Dec 25 11:37 UTC │
	│ start   │ -p NoKubernetes-365903 --driver=docker  --container-runtime=crio                                                                                │ NoKubernetes-365903       │ jenkins │ v1.37.0 │ 06 Dec 25 11:37 UTC │ 06 Dec 25 11:38 UTC │
	│ ssh     │ -p NoKubernetes-365903 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-365903       │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │                     │
	│ delete  │ -p NoKubernetes-365903                                                                                                                          │ NoKubernetes-365903       │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ start   │ -p kubernetes-upgrade-432995 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio        │ kubernetes-upgrade-432995 │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ delete  │ -p missing-upgrade-887720                                                                                                                       │ missing-upgrade-887720    │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ start   │ -p stopped-upgrade-130351 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ stopped-upgrade-130351    │ jenkins │ v1.35.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:39 UTC │
	│ stop    │ -p kubernetes-upgrade-432995                                                                                                                    │ kubernetes-upgrade-432995 │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ start   │ -p kubernetes-upgrade-432995 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-432995 │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │                     │
	│ stop    │ stopped-upgrade-130351 stop                                                                                                                     │ stopped-upgrade-130351    │ jenkins │ v1.35.0 │ 06 Dec 25 11:39 UTC │ 06 Dec 25 11:39 UTC │
	│ start   │ -p stopped-upgrade-130351 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ stopped-upgrade-130351    │ jenkins │ v1.37.0 │ 06 Dec 25 11:39 UTC │ 06 Dec 25 11:43 UTC │
	│ delete  │ -p stopped-upgrade-130351                                                                                                                       │ stopped-upgrade-130351    │ jenkins │ v1.37.0 │ 06 Dec 25 11:43 UTC │ 06 Dec 25 11:43 UTC │
	│ start   │ -p running-upgrade-141321 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ running-upgrade-141321    │ jenkins │ v1.35.0 │ 06 Dec 25 11:43 UTC │ 06 Dec 25 11:44 UTC │
	│ start   │ -p running-upgrade-141321 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ running-upgrade-141321    │ jenkins │ v1.37.0 │ 06 Dec 25 11:44 UTC │ 06 Dec 25 11:48 UTC │
	│ delete  │ -p running-upgrade-141321                                                                                                                       │ running-upgrade-141321    │ jenkins │ v1.37.0 │ 06 Dec 25 11:48 UTC │ 06 Dec 25 11:48 UTC │
	│ start   │ -p pause-508007 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                       │ pause-508007              │ jenkins │ v1.37.0 │ 06 Dec 25 11:48 UTC │ 06 Dec 25 11:50 UTC │
	│ start   │ -p pause-508007 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                │ pause-508007              │ jenkins │ v1.37.0 │ 06 Dec 25 11:50 UTC │ 06 Dec 25 11:50 UTC │
	│ pause   │ -p pause-508007 --alsologtostderr -v=5                                                                                                          │ pause-508007              │ jenkins │ v1.37.0 │ 06 Dec 25 11:50 UTC │                     │
	│ delete  │ -p pause-508007                                                                                                                                 │ pause-508007              │ jenkins │ v1.37.0 │ 06 Dec 25 11:50 UTC │ 06 Dec 25 11:50 UTC │
	│ start   │ -p force-systemd-flag-551542 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                     │ force-systemd-flag-551542 │ jenkins │ v1.37.0 │ 06 Dec 25 11:50 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
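	The last kubernetes-upgrade-432995 start in the audit trail above (to v1.35.0-beta.0) has no END TIME, which matches the TestKubernetesUpgrade failure this post-mortem covers. A repro sketch using the exact flags recorded in that row:

	    out/minikube-linux-arm64 start -p kubernetes-upgrade-432995 --memory=3072 \
	      --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 \
	      --driver=docker --container-runtime=crio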
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 11:50:46
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 11:50:46.335764  579048 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:50:46.335884  579048 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:50:46.335894  579048 out.go:374] Setting ErrFile to fd 2...
	I1206 11:50:46.335900  579048 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:50:46.336163  579048 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 11:50:46.336586  579048 out.go:368] Setting JSON to false
	I1206 11:50:46.337457  579048 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":12798,"bootTime":1765009049,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 11:50:46.337531  579048 start.go:143] virtualization:  
	I1206 11:50:46.341276  579048 out.go:179] * [force-systemd-flag-551542] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 11:50:46.346065  579048 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 11:50:46.346149  579048 notify.go:221] Checking for updates...
	I1206 11:50:46.352951  579048 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 11:50:46.356401  579048 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 11:50:46.359658  579048 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-362985/.minikube
	I1206 11:50:46.362981  579048 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 11:50:46.366180  579048 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 11:50:46.369970  579048 config.go:182] Loaded profile config "kubernetes-upgrade-432995": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 11:50:46.370099  579048 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 11:50:46.405474  579048 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 11:50:46.405617  579048 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:50:46.468359  579048 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:50:46.458371871 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:50:46.468464  579048 docker.go:319] overlay module found
	I1206 11:50:46.472001  579048 out.go:179] * Using the docker driver based on user configuration
	I1206 11:50:46.475063  579048 start.go:309] selected driver: docker
	I1206 11:50:46.475089  579048 start.go:927] validating driver "docker" against <nil>
	I1206 11:50:46.475103  579048 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 11:50:46.475996  579048 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:50:46.541347  579048 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:50:46.531133842 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:50:46.541518  579048 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 11:50:46.541740  579048 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1206 11:50:46.544845  579048 out.go:179] * Using Docker driver with root privileges
	I1206 11:50:46.547811  579048 cni.go:84] Creating CNI manager for ""
	I1206 11:50:46.547895  579048 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 11:50:46.547908  579048 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 11:50:46.548005  579048 start.go:353] cluster config:
	{Name:force-systemd-flag-551542 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-flag-551542 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:50:46.553008  579048 out.go:179] * Starting "force-systemd-flag-551542" primary control-plane node in "force-systemd-flag-551542" cluster
	I1206 11:50:46.556140  579048 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 11:50:46.559227  579048 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 11:50:46.562177  579048 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 11:50:46.562229  579048 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1206 11:50:46.562245  579048 cache.go:65] Caching tarball of preloaded images
	I1206 11:50:46.562267  579048 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 11:50:46.562330  579048 preload.go:238] Found /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1206 11:50:46.562340  579048 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1206 11:50:46.562450  579048 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/force-systemd-flag-551542/config.json ...
	I1206 11:50:46.562467  579048 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/force-systemd-flag-551542/config.json: {Name:mke0107e6ab17f98f1ea2a6fb993da78cbfb0f48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:50:46.582417  579048 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 11:50:46.582445  579048 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 11:50:46.582466  579048 cache.go:243] Successfully downloaded all kic artifacts
	I1206 11:50:46.582497  579048 start.go:360] acquireMachinesLock for force-systemd-flag-551542: {Name:mka91ebfc5fb0816537486157201cfed35cd25c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:50:46.582602  579048 start.go:364] duration metric: took 84.095µs to acquireMachinesLock for "force-systemd-flag-551542"
	I1206 11:50:46.582633  579048 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-551542 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-flag-551542 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 11:50:46.582699  579048 start.go:125] createHost starting for "" (driver="docker")
	I1206 11:50:46.586125  579048 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1206 11:50:46.586355  579048 start.go:159] libmachine.API.Create for "force-systemd-flag-551542" (driver="docker")
	I1206 11:50:46.586390  579048 client.go:173] LocalClient.Create starting
	I1206 11:50:46.586462  579048 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem
	I1206 11:50:46.586504  579048 main.go:143] libmachine: Decoding PEM data...
	I1206 11:50:46.586522  579048 main.go:143] libmachine: Parsing certificate...
	I1206 11:50:46.586587  579048 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem
	I1206 11:50:46.586614  579048 main.go:143] libmachine: Decoding PEM data...
	I1206 11:50:46.586629  579048 main.go:143] libmachine: Parsing certificate...
	I1206 11:50:46.587007  579048 cli_runner.go:164] Run: docker network inspect force-systemd-flag-551542 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1206 11:50:46.603351  579048 cli_runner.go:211] docker network inspect force-systemd-flag-551542 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1206 11:50:46.603458  579048 network_create.go:284] running [docker network inspect force-systemd-flag-551542] to gather additional debugging logs...
	I1206 11:50:46.603482  579048 cli_runner.go:164] Run: docker network inspect force-systemd-flag-551542
	W1206 11:50:46.620666  579048 cli_runner.go:211] docker network inspect force-systemd-flag-551542 returned with exit code 1
	I1206 11:50:46.620694  579048 network_create.go:287] error running [docker network inspect force-systemd-flag-551542]: docker network inspect force-systemd-flag-551542: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-551542 not found
	I1206 11:50:46.620709  579048 network_create.go:289] output of [docker network inspect force-systemd-flag-551542]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-551542 not found
	
	** /stderr **
	I1206 11:50:46.620821  579048 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 11:50:46.637927  579048 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a2e57973c06f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ca:70:7c:95:fc:30} reservation:<nil>}
	I1206 11:50:46.638396  579048 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-aab1d3b47840 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:fa:c0:5e:d0:89:00} reservation:<nil>}
	I1206 11:50:46.638718  579048 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-a4ecdf36c280 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:82:55:ed:59:c8:58} reservation:<nil>}
	I1206 11:50:46.639064  579048 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-e0d0971abecf IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:92:bd:15:fa:40:b4} reservation:<nil>}
	I1206 11:50:46.639610  579048 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a321c0}
	I1206 11:50:46.639633  579048 network_create.go:124] attempt to create docker network force-systemd-flag-551542 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1206 11:50:46.639693  579048 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-551542 force-systemd-flag-551542
	I1206 11:50:46.700493  579048 network_create.go:108] docker network force-systemd-flag-551542 192.168.85.0/24 created
	I1206 11:50:46.700526  579048 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-551542" container
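	The subnet probing above walks minikube's candidate /24 blocks (192.168.49.0, 58.0, 67.0, 76.0, ...) and takes the first one no existing bridge network claims. A minimal shell sketch of the same check, assuming the docker CLI is available and docker networks are the only reservations that matter:

	    taken=$(docker network ls -q | xargs docker network inspect \
	              --format '{{range .IPAM.Config}}{{.Subnet}} {{end}}')
	    for third in 49 58 67 76 85 94; do
	      subnet="192.168.${third}.0/24"
	      case " $taken " in
	        *" $subnet "*) continue ;;            # subnet already claimed by a bridge
	        *) echo "free subnet: $subnet"; break ;;
	      esac
	    done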
	I1206 11:50:46.700617  579048 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1206 11:50:46.717286  579048 cli_runner.go:164] Run: docker volume create force-systemd-flag-551542 --label name.minikube.sigs.k8s.io=force-systemd-flag-551542 --label created_by.minikube.sigs.k8s.io=true
	I1206 11:50:46.735086  579048 oci.go:103] Successfully created a docker volume force-systemd-flag-551542
	I1206 11:50:46.735177  579048 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-551542-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-551542 --entrypoint /usr/bin/test -v force-systemd-flag-551542:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -d /var/lib
	I1206 11:50:47.290074  579048 oci.go:107] Successfully prepared a docker volume force-systemd-flag-551542
	I1206 11:50:47.290149  579048 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 11:50:47.290165  579048 kic.go:194] Starting extracting preloaded images to volume ...
	I1206 11:50:47.290236  579048 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-551542:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir
	I1206 11:50:51.447723  579048 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-551542:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 -I lz4 -xf /preloaded.tar -C /extractDir: (4.157438346s)
	I1206 11:50:51.447761  579048 kic.go:203] duration metric: took 4.157592201s to extract preloaded images to volume ...
	W1206 11:50:51.447909  579048 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1206 11:50:51.448032  579048 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1206 11:50:51.526496  579048 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-551542 --name force-systemd-flag-551542 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-551542 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-551542 --network force-systemd-flag-551542 --ip 192.168.85.2 --volume force-systemd-flag-551542:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164
	I1206 11:50:51.847310  579048 cli_runner.go:164] Run: docker container inspect force-systemd-flag-551542 --format={{.State.Running}}
	I1206 11:50:51.865410  579048 cli_runner.go:164] Run: docker container inspect force-systemd-flag-551542 --format={{.State.Status}}
	I1206 11:50:51.885796  579048 cli_runner.go:164] Run: docker exec force-systemd-flag-551542 stat /var/lib/dpkg/alternatives/iptables
	I1206 11:50:51.939522  579048 oci.go:144] the created container "force-systemd-flag-551542" has a running status.
	I1206 11:50:51.939552  579048 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22047-362985/.minikube/machines/force-systemd-flag-551542/id_rsa...
	I1206 11:50:52.337933  579048 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/machines/force-systemd-flag-551542/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1206 11:50:52.338037  579048 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22047-362985/.minikube/machines/force-systemd-flag-551542/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1206 11:50:52.364517  579048 cli_runner.go:164] Run: docker container inspect force-systemd-flag-551542 --format={{.State.Status}}
	I1206 11:50:52.394658  579048 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1206 11:50:52.394677  579048 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-551542 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1206 11:50:52.475361  579048 cli_runner.go:164] Run: docker container inspect force-systemd-flag-551542 --format={{.State.Status}}
	I1206 11:50:52.494588  579048 machine.go:94] provisionDockerMachine start ...
	I1206 11:50:52.494699  579048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-551542
	I1206 11:50:52.517240  579048 main.go:143] libmachine: Using SSH client type: native
	I1206 11:50:52.517595  579048 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1206 11:50:52.517612  579048 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 11:50:52.518297  579048 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48496->127.0.0.1:33408: read: connection reset by peer
	I1206 11:50:55.671001  579048 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-551542
	
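	The dial at 11:50:52 is reset because sshd inside the fresh container is still starting; the retry succeeds about three seconds later. A wait-loop sketch for the same handshake (host port and key path are copied from the log; the loop itself is an assumption, not minikube's actual retry logic):

	    until ssh -o StrictHostKeyChecking=no -p 33408 \
	          -i /home/jenkins/minikube-integration/22047-362985/.minikube/machines/force-systemd-flag-551542/id_rsa \
	          docker@127.0.0.1 true 2>/dev/null; do
	      sleep 1                                 # sshd not accepting connections yet
	    done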
	I1206 11:50:55.671027  579048 ubuntu.go:182] provisioning hostname "force-systemd-flag-551542"
	I1206 11:50:55.671102  579048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-551542
	I1206 11:50:55.688865  579048 main.go:143] libmachine: Using SSH client type: native
	I1206 11:50:55.689193  579048 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1206 11:50:55.689210  579048 main.go:143] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-551542 && echo "force-systemd-flag-551542" | sudo tee /etc/hostname
	I1206 11:50:55.853283  579048 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-551542
	
	I1206 11:50:55.853360  579048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-551542
	I1206 11:50:55.871549  579048 main.go:143] libmachine: Using SSH client type: native
	I1206 11:50:55.871863  579048 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1206 11:50:55.871886  579048 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-551542' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-551542/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-551542' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 11:50:56.024100  579048 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 11:50:56.024190  579048 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-362985/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-362985/.minikube}
	I1206 11:50:56.024263  579048 ubuntu.go:190] setting up certificates
	I1206 11:50:56.024286  579048 provision.go:84] configureAuth start
	I1206 11:50:56.024380  579048 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-551542
	I1206 11:50:56.045811  579048 provision.go:143] copyHostCerts
	I1206 11:50:56.045856  579048 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem
	I1206 11:50:56.045889  579048 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem, removing ...
	I1206 11:50:56.045896  579048 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem
	I1206 11:50:56.045975  579048 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem (1082 bytes)
	I1206 11:50:56.046061  579048 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem
	I1206 11:50:56.046079  579048 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem, removing ...
	I1206 11:50:56.046084  579048 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem
	I1206 11:50:56.046112  579048 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem (1123 bytes)
	I1206 11:50:56.046154  579048 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem
	I1206 11:50:56.046174  579048 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem, removing ...
	I1206 11:50:56.046178  579048 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem
	I1206 11:50:56.046201  579048 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem (1679 bytes)
	I1206 11:50:56.046245  579048 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-551542 san=[127.0.0.1 192.168.85.2 force-systemd-flag-551542 localhost minikube]
	I1206 11:50:56.619106  579048 provision.go:177] copyRemoteCerts
	I1206 11:50:56.619190  579048 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 11:50:56.619244  579048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-551542
	I1206 11:50:56.638651  579048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/force-systemd-flag-551542/id_rsa Username:docker}
	I1206 11:50:56.745401  579048 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1206 11:50:56.745460  579048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 11:50:56.767774  579048 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1206 11:50:56.767832  579048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1206 11:50:56.788875  579048 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1206 11:50:56.788943  579048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 11:50:56.806861  579048 provision.go:87] duration metric: took 782.540181ms to configureAuth
	I1206 11:50:56.806887  579048 ubuntu.go:206] setting minikube options for container-runtime
	I1206 11:50:56.807072  579048 config.go:182] Loaded profile config "force-systemd-flag-551542": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 11:50:56.807171  579048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-551542
	I1206 11:50:56.824360  579048 main.go:143] libmachine: Using SSH client type: native
	I1206 11:50:56.824677  579048 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1206 11:50:56.824697  579048 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 11:50:57.128359  579048 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 11:50:57.128380  579048 machine.go:97] duration metric: took 4.633767774s to provisionDockerMachine
	I1206 11:50:57.128391  579048 client.go:176] duration metric: took 10.541989431s to LocalClient.Create
	I1206 11:50:57.128405  579048 start.go:167] duration metric: took 10.542051931s to libmachine.API.Create "force-systemd-flag-551542"
	I1206 11:50:57.128412  579048 start.go:293] postStartSetup for "force-systemd-flag-551542" (driver="docker")
	I1206 11:50:57.128422  579048 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 11:50:57.128488  579048 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 11:50:57.128529  579048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-551542
	I1206 11:50:57.145719  579048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/force-systemd-flag-551542/id_rsa Username:docker}
	I1206 11:50:57.251712  579048 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 11:50:57.255187  579048 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 11:50:57.255216  579048 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 11:50:57.255227  579048 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-362985/.minikube/addons for local assets ...
	I1206 11:50:57.255289  579048 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-362985/.minikube/files for local assets ...
	I1206 11:50:57.255405  579048 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem -> 3648552.pem in /etc/ssl/certs
	I1206 11:50:57.255418  579048 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem -> /etc/ssl/certs/3648552.pem
	I1206 11:50:57.255529  579048 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 11:50:57.263342  579048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem --> /etc/ssl/certs/3648552.pem (1708 bytes)
	I1206 11:50:57.282384  579048 start.go:296] duration metric: took 153.955475ms for postStartSetup
	I1206 11:50:57.282839  579048 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-551542
	I1206 11:50:57.299943  579048 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/force-systemd-flag-551542/config.json ...
	I1206 11:50:57.300237  579048 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 11:50:57.300290  579048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-551542
	I1206 11:50:57.317134  579048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/force-systemd-flag-551542/id_rsa Username:docker}
	I1206 11:50:57.420473  579048 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 11:50:57.424917  579048 start.go:128] duration metric: took 10.842196308s to createHost
	I1206 11:50:57.424945  579048 start.go:83] releasing machines lock for "force-systemd-flag-551542", held for 10.842329093s
	I1206 11:50:57.425017  579048 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-551542
	I1206 11:50:57.453363  579048 ssh_runner.go:195] Run: cat /version.json
	I1206 11:50:57.453419  579048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-551542
	I1206 11:50:57.453645  579048 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 11:50:57.453704  579048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-551542
	I1206 11:50:57.485168  579048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/force-systemd-flag-551542/id_rsa Username:docker}
	I1206 11:50:57.491744  579048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/force-systemd-flag-551542/id_rsa Username:docker}
	I1206 11:50:57.599567  579048 ssh_runner.go:195] Run: systemctl --version
	I1206 11:50:57.694772  579048 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 11:50:57.741526  579048 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 11:50:57.746155  579048 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 11:50:57.746230  579048 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 11:50:57.775967  579048 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1206 11:50:57.775989  579048 start.go:496] detecting cgroup driver to use...
	I1206 11:50:57.776002  579048 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1206 11:50:57.776057  579048 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 11:50:57.794051  579048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 11:50:57.807570  579048 docker.go:218] disabling cri-docker service (if available) ...
	I1206 11:50:57.807661  579048 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 11:50:57.825769  579048 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 11:50:57.844514  579048 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 11:50:57.973067  579048 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 11:50:58.104267  579048 docker.go:234] disabling docker service ...
	I1206 11:50:58.104347  579048 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 11:50:58.125799  579048 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 11:50:58.139300  579048 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 11:50:58.262515  579048 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 11:50:58.384009  579048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 11:50:58.397070  579048 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 11:50:58.412285  579048 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 11:50:58.412351  579048 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:50:58.421032  579048 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1206 11:50:58.421107  579048 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:50:58.429944  579048 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:50:58.438947  579048 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:50:58.447776  579048 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 11:50:58.456096  579048 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:50:58.464945  579048 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:50:58.478796  579048 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:50:58.487988  579048 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 11:50:58.495816  579048 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 11:50:58.503548  579048 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:50:58.619457  579048 ssh_runner.go:195] Run: sudo systemctl restart crio
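	The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf before this restart: the pause image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is forced to "systemd" (the whole point of the force-systemd-flag profile), conmon_cgroup is set to "pod", and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls. A quick verification sketch against the same file:

	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	    # expected after the edits:
	    #   pause_image = "registry.k8s.io/pause:3.10.1"
	    #   cgroup_manager = "systemd"
	    #   conmon_cgroup = "pod"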
	I1206 11:50:58.804566  579048 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 11:50:58.804702  579048 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 11:50:58.808739  579048 start.go:564] Will wait 60s for crictl version
	I1206 11:50:58.808809  579048 ssh_runner.go:195] Run: which crictl
	I1206 11:50:58.812454  579048 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 11:50:58.837523  579048 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 11:50:58.837613  579048 ssh_runner.go:195] Run: crio --version
	I1206 11:50:58.867125  579048 ssh_runner.go:195] Run: crio --version
	I1206 11:50:58.901424  579048 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1206 11:50:58.904358  579048 cli_runner.go:164] Run: docker network inspect force-systemd-flag-551542 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 11:50:58.920734  579048 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1206 11:50:58.924535  579048 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
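	The one-liner above edits /etc/hosts atomically: it filters any stale host.minikube.internal entry into a temp file, appends the fresh mapping, then copies the result back with sudo cp. Checking the injected record afterwards is just (sketch):

	    grep 'host.minikube.internal' /etc/hosts   # expect: 192.168.85.1  host.minikube.internal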
	I1206 11:50:58.934592  579048 kubeadm.go:884] updating cluster {Name:force-systemd-flag-551542 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-flag-551542 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 11:50:58.934708  579048 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 11:50:58.934766  579048 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 11:50:58.967504  579048 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 11:50:58.967531  579048 crio.go:433] Images already preloaded, skipping extraction
	I1206 11:50:58.967587  579048 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 11:50:59.006472  579048 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 11:50:59.006515  579048 cache_images.go:86] Images are preloaded, skipping loading
	I1206 11:50:59.006525  579048 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1206 11:50:59.006643  579048 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-flag-551542 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:force-systemd-flag-551542 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
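	This drop-in is what lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf via the scp a few lines below; the empty ExecStart= line clears the unit's inherited command before the override, as systemd requires. To inspect the merged unit on the node (sketch):

	    sudo systemctl cat kubelet    # prints kubelet.service plus the 10-kubeadm.conf drop-in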
	I1206 11:50:59.006754  579048 ssh_runner.go:195] Run: crio config
	I1206 11:50:59.071411  579048 cni.go:84] Creating CNI manager for ""
	I1206 11:50:59.071441  579048 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 11:50:59.071494  579048 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 11:50:59.071526  579048 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-551542 NodeName:force-systemd-flag-551542 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 11:50:59.071728  579048 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-551542"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
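	(The rendered manifest above bundles four kubeadm API objects in one file: InitConfiguration and ClusterConfiguration from kubeadm.k8s.io/v1beta4, a KubeletConfiguration, and a KubeProxyConfiguration. It lands at /var/tmp/minikube/kubeadm.yaml via the scp below before being handed to kubeadm init. A minimal sketch of checking such a file by hand on the node, assuming a kubeadm binary of the matching version is on PATH — the validate subcommand exists in recent kubeadm releases:
	    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	)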
	
	I1206 11:50:59.071843  579048 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 11:50:59.081088  579048 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 11:50:59.081198  579048 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 11:50:59.089052  579048 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1206 11:50:59.102695  579048 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 11:50:59.115920  579048 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1206 11:50:59.129098  579048 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1206 11:50:59.132745  579048 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
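	(The one-liner above is an idempotent /etc/hosts update: filter out any existing control-plane.minikube.internal entry, append the current one, and copy the temp file back over /etc/hosts. The same pattern, annotated for readability — illustrative only:
	    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts;  # drop the stale entry, if any
	      echo "192.168.85.2	control-plane.minikube.internal";      # re-add with the current node IP
	    } > /tmp/h.$$                                                # $$ makes the temp file unique per shell
	    sudo cp /tmp/h.$$ /etc/hosts                                 # cp, not mv, preserves ownership and mode
	)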
	I1206 11:50:59.142501  579048 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:50:59.264449  579048 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 11:50:59.282275  579048 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/force-systemd-flag-551542 for IP: 192.168.85.2
	I1206 11:50:59.282351  579048 certs.go:195] generating shared ca certs ...
	I1206 11:50:59.282383  579048 certs.go:227] acquiring lock for ca certs: {Name:mke2ec61a37b6f3abbcbeb9abd23d6a19d011dd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:50:59.282616  579048 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.key
	I1206 11:50:59.282712  579048 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.key
	I1206 11:50:59.282757  579048 certs.go:257] generating profile certs ...
	I1206 11:50:59.282846  579048 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/force-systemd-flag-551542/client.key
	I1206 11:50:59.282890  579048 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/force-systemd-flag-551542/client.crt with IP's: []
	I1206 11:50:59.918193  579048 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/force-systemd-flag-551542/client.crt ...
	I1206 11:50:59.918229  579048 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/force-systemd-flag-551542/client.crt: {Name:mk64b7f92c4af5fe464ddf6a9ab1b6deceea67ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:50:59.918461  579048 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/force-systemd-flag-551542/client.key ...
	I1206 11:50:59.918479  579048 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/force-systemd-flag-551542/client.key: {Name:mk2f78615d77da6b2c511ccc56b10d6dd37c4a92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:50:59.918581  579048 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/force-systemd-flag-551542/apiserver.key.d38d7f06
	I1206 11:50:59.918600  579048 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/force-systemd-flag-551542/apiserver.crt.d38d7f06 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1206 11:51:00.505276  579048 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/force-systemd-flag-551542/apiserver.crt.d38d7f06 ...
	I1206 11:51:00.505312  579048 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/force-systemd-flag-551542/apiserver.crt.d38d7f06: {Name:mk14380898da5c2eeefdab51a26b95ac296533dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:51:00.505494  579048 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/force-systemd-flag-551542/apiserver.key.d38d7f06 ...
	I1206 11:51:00.505511  579048 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/force-systemd-flag-551542/apiserver.key.d38d7f06: {Name:mk0c9dd362bfdf44e2765b455f4d8a4f235f48e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:51:00.505589  579048 certs.go:382] copying /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/force-systemd-flag-551542/apiserver.crt.d38d7f06 -> /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/force-systemd-flag-551542/apiserver.crt
	I1206 11:51:00.505672  579048 certs.go:386] copying /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/force-systemd-flag-551542/apiserver.key.d38d7f06 -> /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/force-systemd-flag-551542/apiserver.key
	I1206 11:51:00.505738  579048 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/force-systemd-flag-551542/proxy-client.key
	I1206 11:51:00.505759  579048 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/force-systemd-flag-551542/proxy-client.crt with IP's: []
	I1206 11:51:00.791108  579048 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/force-systemd-flag-551542/proxy-client.crt ...
	I1206 11:51:00.791140  579048 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/force-systemd-flag-551542/proxy-client.crt: {Name:mkf826f1ee403dd8a6eeeadd2eea83d6800f63aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:51:00.791332  579048 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/force-systemd-flag-551542/proxy-client.key ...
	I1206 11:51:00.791347  579048 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/force-systemd-flag-551542/proxy-client.key: {Name:mka697fd1200e390387eebe05938f2612bd80636 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
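	(Three profile certificates are minted here, all signed by the shared minikubeCA: a client cert for minikube-user, the apiserver serving cert with SANs covering the service VIP 10.96.0.1, loopback, and the node IP 192.168.85.2, and the front-proxy "aggregator" client cert. An illustrative check that the SANs made it into the serving cert, assuming the default .minikube layout used in this run:
	    openssl x509 -noout -text \
	      -in ~/.minikube/profiles/force-systemd-flag-551542/apiserver.crt \
	      | grep -A1 'Subject Alternative Name'
	)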
	I1206 11:51:00.791470  579048 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1206 11:51:00.791494  579048 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1206 11:51:00.791507  579048 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1206 11:51:00.791527  579048 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1206 11:51:00.791539  579048 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/force-systemd-flag-551542/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1206 11:51:00.791558  579048 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/force-systemd-flag-551542/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1206 11:51:00.791574  579048 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/force-systemd-flag-551542/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1206 11:51:00.791586  579048 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/force-systemd-flag-551542/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1206 11:51:00.791641  579048 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855.pem (1338 bytes)
	W1206 11:51:00.791690  579048 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855_empty.pem, impossibly tiny 0 bytes
	I1206 11:51:00.791703  579048 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 11:51:00.791735  579048 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem (1082 bytes)
	I1206 11:51:00.791765  579048 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem (1123 bytes)
	I1206 11:51:00.791794  579048 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem (1679 bytes)
	I1206 11:51:00.791845  579048 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem (1708 bytes)
	I1206 11:51:00.791882  579048 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:51:00.791899  579048 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855.pem -> /usr/share/ca-certificates/364855.pem
	I1206 11:51:00.791911  579048 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem -> /usr/share/ca-certificates/3648552.pem
	I1206 11:51:00.792426  579048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 11:51:00.814464  579048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 11:51:00.836596  579048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 11:51:00.856643  579048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 11:51:00.874929  579048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/force-systemd-flag-551542/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1206 11:51:00.893231  579048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/force-systemd-flag-551542/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1206 11:51:00.911225  579048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/force-systemd-flag-551542/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 11:51:00.929896  579048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/force-systemd-flag-551542/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 11:51:00.948404  579048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 11:51:00.966983  579048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855.pem --> /usr/share/ca-certificates/364855.pem (1338 bytes)
	I1206 11:51:00.989602  579048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem --> /usr/share/ca-certificates/3648552.pem (1708 bytes)
	I1206 11:51:01.011973  579048 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 11:51:01.028146  579048 ssh_runner.go:195] Run: openssl version
	I1206 11:51:01.036392  579048 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:51:01.044361  579048 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 11:51:01.052704  579048 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:51:01.056897  579048 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:26 /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:51:01.056962  579048 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:51:01.098372  579048 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 11:51:01.106404  579048 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1206 11:51:01.114273  579048 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364855.pem
	I1206 11:51:01.122426  579048 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364855.pem /etc/ssl/certs/364855.pem
	I1206 11:51:01.130306  579048 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364855.pem
	I1206 11:51:01.134263  579048 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:36 /usr/share/ca-certificates/364855.pem
	I1206 11:51:01.134340  579048 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364855.pem
	I1206 11:51:01.176106  579048 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 11:51:01.184758  579048 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/364855.pem /etc/ssl/certs/51391683.0
	I1206 11:51:01.193273  579048 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648552.pem
	I1206 11:51:01.201879  579048 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648552.pem /etc/ssl/certs/3648552.pem
	I1206 11:51:01.211711  579048 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648552.pem
	I1206 11:51:01.215838  579048 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:36 /usr/share/ca-certificates/3648552.pem
	I1206 11:51:01.215906  579048 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648552.pem
	I1206 11:51:01.265548  579048 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 11:51:01.275985  579048 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3648552.pem /etc/ssl/certs/3ec20f2e.0
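	(The test/link/hash/link sequence above implements the c_rehash convention: OpenSSL locates trust anchors in /etc/ssl/certs by subject-name hash, so each CA PEM gets a symlink named <hash>.0. The hash printed by
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	is b5213941 here, which is why the symlink created two lines after it is /etc/ssl/certs/b5213941.0; the 364855.pem and 3648552.pem certs get 51391683.0 and 3ec20f2e.0 the same way.)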
	I1206 11:51:01.285477  579048 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 11:51:01.290201  579048 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1206 11:51:01.290255  579048 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-551542 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-flag-551542 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:51:01.290339  579048 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 11:51:01.290406  579048 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 11:51:01.333862  579048 cri.go:89] found id: ""
	I1206 11:51:01.333942  579048 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 11:51:01.342995  579048 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1206 11:51:01.351423  579048 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1206 11:51:01.351538  579048 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1206 11:51:01.359826  579048 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1206 11:51:01.359846  579048 kubeadm.go:158] found existing configuration files:
	
	I1206 11:51:01.359933  579048 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1206 11:51:01.368320  579048 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1206 11:51:01.368415  579048 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1206 11:51:01.376148  579048 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1206 11:51:01.384456  579048 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1206 11:51:01.384548  579048 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1206 11:51:01.392301  579048 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1206 11:51:01.400721  579048 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1206 11:51:01.400812  579048 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1206 11:51:01.408771  579048 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1206 11:51:01.416896  579048 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1206 11:51:01.416963  579048 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1206 11:51:01.424713  579048 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
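	(Before the init above, the grep-then-rm loop removed any kubeconfig not pointing at control-plane.minikube.internal:8443 — here all four files were simply absent, so nothing stale survived. The init itself skips a fixed list of preflight checks that are expected to fire inside a container; SystemVerification is among them because of the docker driver, as logged earlier. The same preflight phase can be exercised in isolation as a sketch — the run above passes a longer ignore list:
	    sudo kubeadm init phase preflight \
	      --config /var/tmp/minikube/kubeadm.yaml \
	      --ignore-preflight-errors=SystemVerification
	)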
	I1206 11:51:01.466233  579048 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1206 11:51:01.466544  579048 kubeadm.go:319] [preflight] Running pre-flight checks
	I1206 11:51:01.491487  579048 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1206 11:51:01.491637  579048 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1206 11:51:01.491693  579048 kubeadm.go:319] OS: Linux
	I1206 11:51:01.491766  579048 kubeadm.go:319] CGROUPS_CPU: enabled
	I1206 11:51:01.491839  579048 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1206 11:51:01.491915  579048 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1206 11:51:01.491987  579048 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1206 11:51:01.492060  579048 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1206 11:51:01.492136  579048 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1206 11:51:01.492211  579048 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1206 11:51:01.492283  579048 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1206 11:51:01.492374  579048 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1206 11:51:01.562205  579048 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1206 11:51:01.562380  579048 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1206 11:51:01.562479  579048 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1206 11:51:01.575776  579048 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1206 11:51:01.579130  579048 out.go:252]   - Generating certificates and keys ...
	I1206 11:51:01.579296  579048 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1206 11:51:01.579432  579048 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1206 11:51:01.819121  579048 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1206 11:51:02.320226  579048 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1206 11:51:02.648655  579048 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1206 11:51:03.227583  579048 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1206 11:51:03.514810  579048 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1206 11:51:03.515062  579048 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-551542 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1206 11:51:03.868396  579048 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1206 11:51:03.868747  579048 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-551542 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1206 11:51:04.106460  579048 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1206 11:51:04.233153  579048 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1206 11:51:04.427915  579048 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1206 11:51:04.428208  579048 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1206 11:51:04.731784  579048 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1206 11:51:04.875913  579048 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1206 11:51:06.075537  579048 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1206 11:51:06.228050  579048 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1206 11:51:07.438103  579048 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1206 11:51:07.438666  579048 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1206 11:51:07.441300  579048 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1206 11:51:07.444659  579048 out.go:252]   - Booting up control plane ...
	I1206 11:51:07.444770  579048 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1206 11:51:07.444860  579048 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1206 11:51:07.445017  579048 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1206 11:51:07.463227  579048 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1206 11:51:07.463336  579048 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1206 11:51:07.470561  579048 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1206 11:51:07.470900  579048 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1206 11:51:07.470948  579048 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1206 11:51:07.615718  579048 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1206 11:51:07.615838  579048 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1206 11:51:08.610295  579048 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001767951s
	I1206 11:51:08.613729  579048 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1206 11:51:08.613822  579048 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1206 11:51:08.613912  579048 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1206 11:51:08.614227  579048 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1206 11:51:12.724120  544774 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.00132153s
	I1206 11:51:12.724177  544774 kubeadm.go:319] 
	I1206 11:51:12.724243  544774 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1206 11:51:12.724283  544774 kubeadm.go:319] 	- The kubelet is not running
	I1206 11:51:12.724394  544774 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1206 11:51:12.724403  544774 kubeadm.go:319] 
	I1206 11:51:12.724515  544774 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1206 11:51:12.724550  544774 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1206 11:51:12.724585  544774 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1206 11:51:12.724592  544774 kubeadm.go:319] 
	I1206 11:51:12.728928  544774 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1206 11:51:12.729358  544774 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1206 11:51:12.729471  544774 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1206 11:51:12.729710  544774 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1206 11:51:12.729720  544774 kubeadm.go:319] 
	I1206 11:51:12.729788  544774 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
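	(This failure mode is the classic one for the docker driver on a cgroup v1 host: kubeadm started the kubelet, but its health endpoint on 127.0.0.1:10248 never answered within 4m0s, so wait-control-plane timed out before any static pod could be created. The two suggested commands plus a driver-mismatch check are the usual first triage steps on the node — illustrative; crio.conf typically carries cgroup_manager under [crio.runtime]:
	    systemctl status kubelet
	    journalctl -xeu kubelet | tail -n 50
	    grep cgroup_manager /etc/crio/crio.conf          # what CRI-O uses
	    grep cgroupDriver /var/lib/kubelet/config.yaml   # what the kubelet uses; the two must match
	    curl -sS http://127.0.0.1:10248/healthz          # the probe kubeadm was waiting on
	)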
	I1206 11:51:12.729844  544774 kubeadm.go:403] duration metric: took 12m10.170877222s to StartCluster
	I1206 11:51:12.729883  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1206 11:51:12.729947  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1206 11:51:12.772973  544774 cri.go:89] found id: ""
	I1206 11:51:12.773000  544774 logs.go:282] 0 containers: []
	W1206 11:51:12.773009  544774 logs.go:284] No container was found matching "kube-apiserver"
	I1206 11:51:12.773015  544774 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1206 11:51:12.773076  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1206 11:51:12.827021  544774 cri.go:89] found id: ""
	I1206 11:51:12.827049  544774 logs.go:282] 0 containers: []
	W1206 11:51:12.827058  544774 logs.go:284] No container was found matching "etcd"
	I1206 11:51:12.827065  544774 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1206 11:51:12.827132  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1206 11:51:12.878403  544774 cri.go:89] found id: ""
	I1206 11:51:12.878426  544774 logs.go:282] 0 containers: []
	W1206 11:51:12.878433  544774 logs.go:284] No container was found matching "coredns"
	I1206 11:51:12.878439  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1206 11:51:12.878502  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1206 11:51:12.933678  544774 cri.go:89] found id: ""
	I1206 11:51:12.933703  544774 logs.go:282] 0 containers: []
	W1206 11:51:12.933713  544774 logs.go:284] No container was found matching "kube-scheduler"
	I1206 11:51:12.933720  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1206 11:51:12.933781  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1206 11:51:13.002629  544774 cri.go:89] found id: ""
	I1206 11:51:13.002655  544774 logs.go:282] 0 containers: []
	W1206 11:51:13.002664  544774 logs.go:284] No container was found matching "kube-proxy"
	I1206 11:51:13.002671  544774 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1206 11:51:13.002739  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1206 11:51:13.039549  544774 cri.go:89] found id: ""
	I1206 11:51:13.039634  544774 logs.go:282] 0 containers: []
	W1206 11:51:13.039659  544774 logs.go:284] No container was found matching "kube-controller-manager"
	I1206 11:51:13.039677  544774 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1206 11:51:13.039789  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1206 11:51:13.091734  544774 cri.go:89] found id: ""
	I1206 11:51:13.091815  544774 logs.go:282] 0 containers: []
	W1206 11:51:13.091838  544774 logs.go:284] No container was found matching "kindnet"
	I1206 11:51:13.091859  544774 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1206 11:51:13.091976  544774 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1206 11:51:13.137652  544774 cri.go:89] found id: ""
	I1206 11:51:13.137731  544774 logs.go:282] 0 containers: []
	W1206 11:51:13.137761  544774 logs.go:284] No container was found matching "storage-provisioner"
	I1206 11:51:13.137787  544774 logs.go:123] Gathering logs for kubelet ...
	I1206 11:51:13.137828  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1206 11:51:13.221234  544774 logs.go:123] Gathering logs for dmesg ...
	I1206 11:51:13.221326  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1206 11:51:13.252676  544774 logs.go:123] Gathering logs for describe nodes ...
	I1206 11:51:13.252759  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1206 11:51:13.364828  544774 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1206 11:51:13.364851  544774 logs.go:123] Gathering logs for CRI-O ...
	I1206 11:51:13.364864  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1206 11:51:13.399130  544774 logs.go:123] Gathering logs for container status ...
	I1206 11:51:13.399163  544774 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1206 11:51:13.429142  544774 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00132153s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1206 11:51:13.429196  544774 out.go:285] * 
	W1206 11:51:13.429249  544774 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00132153s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 11:51:13.429266  544774 out.go:285] * 
	W1206 11:51:13.431476  544774 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 11:51:13.438425  544774 out.go:203] 
	W1206 11:51:13.441301  544774 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.00132153s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1206 11:51:13.441365  544774 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1206 11:51:13.441387  544774 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1206 11:51:13.444568  544774 out.go:203] 
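	(The suggestion above maps to a start flag rather than a config edit; a minimal retry for this profile would look like the following, illustrative, with parameters taken from this run:
	    minikube start -p kubernetes-upgrade-432995 \
	      --extra-config=kubelet.cgroup-driver=systemd
	Given the deprecation warning in the output — kubelet v1.35 or newer requires the FailCgroupV1 kubelet option set to false to run on cgroup v1 — the driver flag alone may not be sufficient on this 5.15 cgroup v1 kernel.)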
	
	
	==> CRI-O <==
	Dec 06 11:38:54 kubernetes-upgrade-432995 crio[616]: time="2025-12-06T11:38:54.824531391Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 06 11:38:54 kubernetes-upgrade-432995 crio[616]: time="2025-12-06T11:38:54.824703134Z" level=info msg="Starting seccomp notifier watcher"
	Dec 06 11:38:54 kubernetes-upgrade-432995 crio[616]: time="2025-12-06T11:38:54.824828017Z" level=info msg="Create NRI interface"
	Dec 06 11:38:54 kubernetes-upgrade-432995 crio[616]: time="2025-12-06T11:38:54.825011199Z" level=info msg="built-in NRI default validator is disabled"
	Dec 06 11:38:54 kubernetes-upgrade-432995 crio[616]: time="2025-12-06T11:38:54.825082133Z" level=info msg="runtime interface created"
	Dec 06 11:38:54 kubernetes-upgrade-432995 crio[616]: time="2025-12-06T11:38:54.825231845Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 06 11:38:54 kubernetes-upgrade-432995 crio[616]: time="2025-12-06T11:38:54.825310763Z" level=info msg="runtime interface starting up..."
	Dec 06 11:38:54 kubernetes-upgrade-432995 crio[616]: time="2025-12-06T11:38:54.825367116Z" level=info msg="starting plugins..."
	Dec 06 11:38:54 kubernetes-upgrade-432995 crio[616]: time="2025-12-06T11:38:54.825441841Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 06 11:38:54 kubernetes-upgrade-432995 crio[616]: time="2025-12-06T11:38:54.825565526Z" level=info msg="No systemd watchdog enabled"
	Dec 06 11:38:54 kubernetes-upgrade-432995 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 06 11:43:08 kubernetes-upgrade-432995 crio[616]: time="2025-12-06T11:43:08.543469671Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=51761afe-7460-4f10-a688-b81570dd9f37 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:43:08 kubernetes-upgrade-432995 crio[616]: time="2025-12-06T11:43:08.544650455Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=59dd5f5b-b14d-4c30-b98f-c52f393862e9 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:43:08 kubernetes-upgrade-432995 crio[616]: time="2025-12-06T11:43:08.545331412Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=37128049-dc33-4e12-b0f3-269e1a06f2ea name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:43:08 kubernetes-upgrade-432995 crio[616]: time="2025-12-06T11:43:08.546323698Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=154b27bf-e5df-4319-9bdb-47ad2eeda1bc name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:43:08 kubernetes-upgrade-432995 crio[616]: time="2025-12-06T11:43:08.549455495Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=1d2a3bba-0e8d-4875-8269-b8475299c7c8 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:43:08 kubernetes-upgrade-432995 crio[616]: time="2025-12-06T11:43:08.5502956Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=2b88c1ff-f4fc-4168-978b-db1d43b4f37e name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:43:08 kubernetes-upgrade-432995 crio[616]: time="2025-12-06T11:43:08.550868092Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=56b00f4b-5bb6-4c3a-8924-ee0bf5a23609 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:47:11 kubernetes-upgrade-432995 crio[616]: time="2025-12-06T11:47:11.134381672Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=b1fab438-bb11-46d3-bed3-779310270997 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:47:11 kubernetes-upgrade-432995 crio[616]: time="2025-12-06T11:47:11.135795952Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=b07c7a20-1356-4353-88c2-9b71eaf0beac name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:47:11 kubernetes-upgrade-432995 crio[616]: time="2025-12-06T11:47:11.136522218Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=e2f71b79-1392-4930-8a0b-da56d2889231 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:47:11 kubernetes-upgrade-432995 crio[616]: time="2025-12-06T11:47:11.137153469Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=ad9f6ff2-c938-42b2-8737-7ce17242b371 name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:47:11 kubernetes-upgrade-432995 crio[616]: time="2025-12-06T11:47:11.137751143Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=0b17c8f6-0239-4ef3-88da-77b48a1b8fcf name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:47:11 kubernetes-upgrade-432995 crio[616]: time="2025-12-06T11:47:11.138336682Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=5e13abae-44aa-439e-8614-125eb2887edf name=/runtime.v1.ImageService/ImageStatus
	Dec 06 11:47:11 kubernetes-upgrade-432995 crio[616]: time="2025-12-06T11:47:11.138971641Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=75a9a64c-fcc8-4edc-9f6d-4840d0a089b3 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[ +35.228669] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:15] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:16] overlayfs: idmapped layers are currently not supported
	[  +4.168000] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:17] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:18] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:19] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:24] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:25] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:26] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:27] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:28] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:29] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:30] overlayfs: idmapped layers are currently not supported
	[  +6.342378] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:31] overlayfs: idmapped layers are currently not supported
	[ +25.558454] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:32] overlayfs: idmapped layers are currently not supported
	[ +27.925408] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:33] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:34] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:37] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:38] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:49] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:51] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 11:51:15 up  3:33,  0 user,  load average: 3.02, 2.00, 1.94
	Linux kubernetes-upgrade-432995 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 06 11:51:13 kubernetes-upgrade-432995 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:51:13 kubernetes-upgrade-432995 kubelet[12355]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:51:13 kubernetes-upgrade-432995 kubelet[12355]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:51:13 kubernetes-upgrade-432995 kubelet[12355]: E1206 11:51:13.348556   12355 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:51:13 kubernetes-upgrade-432995 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:51:13 kubernetes-upgrade-432995 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:51:13 kubernetes-upgrade-432995 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 964.
	Dec 06 11:51:13 kubernetes-upgrade-432995 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:51:13 kubernetes-upgrade-432995 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:51:14 kubernetes-upgrade-432995 kubelet[12381]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:51:14 kubernetes-upgrade-432995 kubelet[12381]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:51:14 kubernetes-upgrade-432995 kubelet[12381]: E1206 11:51:14.055960   12381 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:51:14 kubernetes-upgrade-432995 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:51:14 kubernetes-upgrade-432995 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:51:14 kubernetes-upgrade-432995 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 965.
	Dec 06 11:51:14 kubernetes-upgrade-432995 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:51:14 kubernetes-upgrade-432995 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:51:14 kubernetes-upgrade-432995 kubelet[12401]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:51:14 kubernetes-upgrade-432995 kubelet[12401]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 06 11:51:14 kubernetes-upgrade-432995 kubelet[12401]: E1206 11:51:14.799316   12401 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 06 11:51:14 kubernetes-upgrade-432995 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 06 11:51:14 kubernetes-upgrade-432995 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 06 11:51:15 kubernetes-upgrade-432995 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 966.
	Dec 06 11:51:15 kubernetes-upgrade-432995 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 06 11:51:15 kubernetes-upgrade-432995 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
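The journal excerpt above pins down the upgrade failure: the v1.35.0-beta.0 kubelet rejects its own configuration on a cgroup v1 host ("cgroup v1 support is unsupported and will be removed in a future release"), systemd crash-loops it (restart counter 964 through 966), so the apiserver on localhost:8443 never comes back and "describe nodes" is refused. A minimal way to confirm the host's cgroup mode, assuming shell access to the node; these commands are illustrative and were not part of the recorded run:

	# Prints "cgroup2fs" on a cgroup v2 host and "tmpfs" on cgroup v1.
	stat -fc %T /sys/fs/cgroup/
	# systemd hosts can be switched to the unified (v2) hierarchy with this kernel
	# command-line flag; a reboot is required for it to take effect.
	grep -o 'systemd.unified_cgroup_hierarchy=[01]' /proc/cmdline || echo "flag not set"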
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-432995 -n kubernetes-upgrade-432995
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-432995 -n kubernetes-upgrade-432995: exit status 2 (414.309963ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "kubernetes-upgrade-432995" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-432995" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-432995
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-432995: (2.66171093s)
--- FAIL: TestKubernetesUpgrade (793.87s)
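For local iteration on this failure, the single test can be re-run with standard go test flags; the package path below assumes the usual minikube repository layout (the *_test.go files quoted above live in test/integration) and a prebuilt out/minikube-linux-arm64 binary:

	# Re-run only the failing upgrade test with a generous timeout.
	go test -v -run 'TestKubernetesUpgrade' -timeout 40m ./test/integration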

                                                
                                    
TestPause/serial/Pause (6.83s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-508007 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-508007 --alsologtostderr -v=5: exit status 80 (2.151151572s)

                                                
                                                
-- stdout --
	* Pausing node pause-508007 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 11:50:37.053772  577576 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:50:37.054478  577576 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:50:37.054494  577576 out.go:374] Setting ErrFile to fd 2...
	I1206 11:50:37.054501  577576 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:50:37.054812  577576 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 11:50:37.055118  577576 out.go:368] Setting JSON to false
	I1206 11:50:37.055165  577576 mustload.go:66] Loading cluster: pause-508007
	I1206 11:50:37.055734  577576 config.go:182] Loaded profile config "pause-508007": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 11:50:37.056489  577576 cli_runner.go:164] Run: docker container inspect pause-508007 --format={{.State.Status}}
	I1206 11:50:37.076749  577576 host.go:66] Checking if "pause-508007" exists ...
	I1206 11:50:37.077096  577576 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:50:37.145239  577576 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-06 11:50:37.135757331 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:50:37.145931  577576 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22032/minikube-v1.37.0-1764843329-22032-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1764843329-22032/minikube-v1.37.0-1764843329-22032-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1764843329-22032-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-508007 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1206 11:50:37.148766  577576 out.go:179] * Pausing node pause-508007 ... 
	I1206 11:50:37.152470  577576 host.go:66] Checking if "pause-508007" exists ...
	I1206 11:50:37.152826  577576 ssh_runner.go:195] Run: systemctl --version
	I1206 11:50:37.152885  577576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-508007
	I1206 11:50:37.171546  577576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/pause-508007/id_rsa Username:docker}
	I1206 11:50:37.278657  577576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 11:50:37.292155  577576 pause.go:52] kubelet running: true
	I1206 11:50:37.292252  577576 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1206 11:50:37.518947  577576 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1206 11:50:37.519049  577576 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1206 11:50:37.592202  577576 cri.go:89] found id: "133d8dbea499991081a35996bff0233170d2b1a868bf3857b507c1583c7a9ec1"
	I1206 11:50:37.592227  577576 cri.go:89] found id: "32544fef39c6ea0b6046c208ce2e51a0df4f3b80aa280b7c76f9760b3e50af4d"
	I1206 11:50:37.592232  577576 cri.go:89] found id: "2390e459c24305efc554a27a13a06262c95efaabcf624ba8fde250b8bad71a9a"
	I1206 11:50:37.592236  577576 cri.go:89] found id: "3090dcde82b5b2a345d93400577bfcb990c7d3e5129f6ec7e8a432e9ec254f4c"
	I1206 11:50:37.592239  577576 cri.go:89] found id: "aaae6a7e273cd079a387ceb0a651520860046fb1f5d6f1b68400e13685bcb58e"
	I1206 11:50:37.592243  577576 cri.go:89] found id: "29181bd0fdf81e45e507ff05c995486187c8a5460d27b3f5b331dbc17b03e19d"
	I1206 11:50:37.592246  577576 cri.go:89] found id: "ad2550eb9af565122164d80705b913cbc9bf4229a2e36150fead0da5caa0dbcd"
	I1206 11:50:37.592249  577576 cri.go:89] found id: "7d98153e0d77ef5e7104676cb3aa0afe2b6048a911a88464258274f081e58123"
	I1206 11:50:37.592252  577576 cri.go:89] found id: "4e43dc49fbc59be905132b7da10ac28f16fc2c8f552bd7b9c8f94927db5ca288"
	I1206 11:50:37.592259  577576 cri.go:89] found id: "e2d729fb666f2c3372fe7e8b35b9708de236469e7c191e9607f7590ca46e22a1"
	I1206 11:50:37.592262  577576 cri.go:89] found id: "80b9d15d2b9dbc9bd53e824fe457e3a98241cab7b0c1a75bc72b601c225e0a3e"
	I1206 11:50:37.592265  577576 cri.go:89] found id: "9947888e935f9906b6a8f63bf826008ea52d75f58d3993f4ddbe6b0e8d1b28bb"
	I1206 11:50:37.592268  577576 cri.go:89] found id: "102a55336c508b8d8bd4e02e9e13195ac8d952a43ccd1ad108693e4e8e3ddd88"
	I1206 11:50:37.592271  577576 cri.go:89] found id: "c1f11f7bb9c73c4b9f1bda5d2219c7bde9c1a3799ace49169aa65d33d5ef49d0"
	I1206 11:50:37.592274  577576 cri.go:89] found id: ""
	I1206 11:50:37.592324  577576 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 11:50:37.603296  577576 retry.go:31] will retry after 240.351448ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T11:50:37Z" level=error msg="open /run/runc: no such file or directory"
	I1206 11:50:37.844855  577576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 11:50:37.858101  577576 pause.go:52] kubelet running: false
	I1206 11:50:37.858175  577576 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1206 11:50:38.018923  577576 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1206 11:50:38.019114  577576 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1206 11:50:38.116743  577576 cri.go:89] found id: "133d8dbea499991081a35996bff0233170d2b1a868bf3857b507c1583c7a9ec1"
	I1206 11:50:38.116777  577576 cri.go:89] found id: "32544fef39c6ea0b6046c208ce2e51a0df4f3b80aa280b7c76f9760b3e50af4d"
	I1206 11:50:38.116782  577576 cri.go:89] found id: "2390e459c24305efc554a27a13a06262c95efaabcf624ba8fde250b8bad71a9a"
	I1206 11:50:38.116786  577576 cri.go:89] found id: "3090dcde82b5b2a345d93400577bfcb990c7d3e5129f6ec7e8a432e9ec254f4c"
	I1206 11:50:38.116789  577576 cri.go:89] found id: "aaae6a7e273cd079a387ceb0a651520860046fb1f5d6f1b68400e13685bcb58e"
	I1206 11:50:38.116792  577576 cri.go:89] found id: "29181bd0fdf81e45e507ff05c995486187c8a5460d27b3f5b331dbc17b03e19d"
	I1206 11:50:38.116796  577576 cri.go:89] found id: "ad2550eb9af565122164d80705b913cbc9bf4229a2e36150fead0da5caa0dbcd"
	I1206 11:50:38.116799  577576 cri.go:89] found id: "7d98153e0d77ef5e7104676cb3aa0afe2b6048a911a88464258274f081e58123"
	I1206 11:50:38.116802  577576 cri.go:89] found id: "4e43dc49fbc59be905132b7da10ac28f16fc2c8f552bd7b9c8f94927db5ca288"
	I1206 11:50:38.116809  577576 cri.go:89] found id: "e2d729fb666f2c3372fe7e8b35b9708de236469e7c191e9607f7590ca46e22a1"
	I1206 11:50:38.116813  577576 cri.go:89] found id: "80b9d15d2b9dbc9bd53e824fe457e3a98241cab7b0c1a75bc72b601c225e0a3e"
	I1206 11:50:38.116816  577576 cri.go:89] found id: "9947888e935f9906b6a8f63bf826008ea52d75f58d3993f4ddbe6b0e8d1b28bb"
	I1206 11:50:38.116820  577576 cri.go:89] found id: "102a55336c508b8d8bd4e02e9e13195ac8d952a43ccd1ad108693e4e8e3ddd88"
	I1206 11:50:38.116832  577576 cri.go:89] found id: "c1f11f7bb9c73c4b9f1bda5d2219c7bde9c1a3799ace49169aa65d33d5ef49d0"
	I1206 11:50:38.116842  577576 cri.go:89] found id: ""
	I1206 11:50:38.116895  577576 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 11:50:38.128980  577576 retry.go:31] will retry after 215.798841ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T11:50:38Z" level=error msg="open /run/runc: no such file or directory"
	I1206 11:50:38.345473  577576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 11:50:38.358722  577576 pause.go:52] kubelet running: false
	I1206 11:50:38.358790  577576 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1206 11:50:38.498018  577576 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1206 11:50:38.498118  577576 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1206 11:50:38.567674  577576 cri.go:89] found id: "133d8dbea499991081a35996bff0233170d2b1a868bf3857b507c1583c7a9ec1"
	I1206 11:50:38.567696  577576 cri.go:89] found id: "32544fef39c6ea0b6046c208ce2e51a0df4f3b80aa280b7c76f9760b3e50af4d"
	I1206 11:50:38.567701  577576 cri.go:89] found id: "2390e459c24305efc554a27a13a06262c95efaabcf624ba8fde250b8bad71a9a"
	I1206 11:50:38.567705  577576 cri.go:89] found id: "3090dcde82b5b2a345d93400577bfcb990c7d3e5129f6ec7e8a432e9ec254f4c"
	I1206 11:50:38.567708  577576 cri.go:89] found id: "aaae6a7e273cd079a387ceb0a651520860046fb1f5d6f1b68400e13685bcb58e"
	I1206 11:50:38.567712  577576 cri.go:89] found id: "29181bd0fdf81e45e507ff05c995486187c8a5460d27b3f5b331dbc17b03e19d"
	I1206 11:50:38.567715  577576 cri.go:89] found id: "ad2550eb9af565122164d80705b913cbc9bf4229a2e36150fead0da5caa0dbcd"
	I1206 11:50:38.567718  577576 cri.go:89] found id: "7d98153e0d77ef5e7104676cb3aa0afe2b6048a911a88464258274f081e58123"
	I1206 11:50:38.567721  577576 cri.go:89] found id: "4e43dc49fbc59be905132b7da10ac28f16fc2c8f552bd7b9c8f94927db5ca288"
	I1206 11:50:38.567727  577576 cri.go:89] found id: "e2d729fb666f2c3372fe7e8b35b9708de236469e7c191e9607f7590ca46e22a1"
	I1206 11:50:38.567730  577576 cri.go:89] found id: "80b9d15d2b9dbc9bd53e824fe457e3a98241cab7b0c1a75bc72b601c225e0a3e"
	I1206 11:50:38.567733  577576 cri.go:89] found id: "9947888e935f9906b6a8f63bf826008ea52d75f58d3993f4ddbe6b0e8d1b28bb"
	I1206 11:50:38.567736  577576 cri.go:89] found id: "102a55336c508b8d8bd4e02e9e13195ac8d952a43ccd1ad108693e4e8e3ddd88"
	I1206 11:50:38.567742  577576 cri.go:89] found id: "c1f11f7bb9c73c4b9f1bda5d2219c7bde9c1a3799ace49169aa65d33d5ef49d0"
	I1206 11:50:38.567745  577576 cri.go:89] found id: ""
	I1206 11:50:38.567800  577576 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 11:50:38.579748  577576 retry.go:31] will retry after 301.932601ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T11:50:38Z" level=error msg="open /run/runc: no such file or directory"
	I1206 11:50:38.882311  577576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 11:50:38.895427  577576 pause.go:52] kubelet running: false
	I1206 11:50:38.895547  577576 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1206 11:50:39.036310  577576 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1206 11:50:39.036429  577576 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1206 11:50:39.109627  577576 cri.go:89] found id: "133d8dbea499991081a35996bff0233170d2b1a868bf3857b507c1583c7a9ec1"
	I1206 11:50:39.109694  577576 cri.go:89] found id: "32544fef39c6ea0b6046c208ce2e51a0df4f3b80aa280b7c76f9760b3e50af4d"
	I1206 11:50:39.109713  577576 cri.go:89] found id: "2390e459c24305efc554a27a13a06262c95efaabcf624ba8fde250b8bad71a9a"
	I1206 11:50:39.109732  577576 cri.go:89] found id: "3090dcde82b5b2a345d93400577bfcb990c7d3e5129f6ec7e8a432e9ec254f4c"
	I1206 11:50:39.109751  577576 cri.go:89] found id: "aaae6a7e273cd079a387ceb0a651520860046fb1f5d6f1b68400e13685bcb58e"
	I1206 11:50:39.109771  577576 cri.go:89] found id: "29181bd0fdf81e45e507ff05c995486187c8a5460d27b3f5b331dbc17b03e19d"
	I1206 11:50:39.109799  577576 cri.go:89] found id: "ad2550eb9af565122164d80705b913cbc9bf4229a2e36150fead0da5caa0dbcd"
	I1206 11:50:39.109816  577576 cri.go:89] found id: "7d98153e0d77ef5e7104676cb3aa0afe2b6048a911a88464258274f081e58123"
	I1206 11:50:39.109826  577576 cri.go:89] found id: "4e43dc49fbc59be905132b7da10ac28f16fc2c8f552bd7b9c8f94927db5ca288"
	I1206 11:50:39.109874  577576 cri.go:89] found id: "e2d729fb666f2c3372fe7e8b35b9708de236469e7c191e9607f7590ca46e22a1"
	I1206 11:50:39.109882  577576 cri.go:89] found id: "80b9d15d2b9dbc9bd53e824fe457e3a98241cab7b0c1a75bc72b601c225e0a3e"
	I1206 11:50:39.109886  577576 cri.go:89] found id: "9947888e935f9906b6a8f63bf826008ea52d75f58d3993f4ddbe6b0e8d1b28bb"
	I1206 11:50:39.109890  577576 cri.go:89] found id: "102a55336c508b8d8bd4e02e9e13195ac8d952a43ccd1ad108693e4e8e3ddd88"
	I1206 11:50:39.109893  577576 cri.go:89] found id: "c1f11f7bb9c73c4b9f1bda5d2219c7bde9c1a3799ace49169aa65d33d5ef49d0"
	I1206 11:50:39.109897  577576 cri.go:89] found id: ""
	I1206 11:50:39.109948  577576 ssh_runner.go:195] Run: sudo runc list -f json
	I1206 11:50:39.124500  577576 out.go:203] 
	W1206 11:50:39.127597  577576 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T11:50:39Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T11:50:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1206 11:50:39.127620  577576 out.go:285] * 
	* 
	W1206 11:50:39.133311  577576 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1206 11:50:39.136418  577576 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-508007 --alsologtostderr -v=5" : exit status 80
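Note that the pause failure happens before any Kubernetes call: every "sudo runc list -f json" retry fails with "open /run/runc: no such file or directory", so minikube cannot enumerate the containers it is supposed to pause even though crictl had just listed fourteen of them. A small diagnostic sketch, assuming the profile is still up; the state root is an assumption (runc defaults to /run/runc, while a CRI-O configured for crun keeps its state elsewhere):

	# Check whether runc's state root exists inside the node at all.
	out/minikube-linux-arm64 -p pause-508007 ssh -- sudo ls /run/runc
	# List containers through runc's default root, then through the CRI for comparison.
	out/minikube-linux-arm64 -p pause-508007 ssh -- sudo runc --root /run/runc list
	out/minikube-linux-arm64 -p pause-508007 ssh -- sudo crictl ps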
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-508007
helpers_test.go:243: (dbg) docker inspect pause-508007:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "68ccb790666283eeb7175ee670d86f2a9bccd05a278e10c5b4be00fc58d84841",
	        "Created": "2025-12-06T11:48:51.430931899Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 573655,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T11:48:51.501402052Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/68ccb790666283eeb7175ee670d86f2a9bccd05a278e10c5b4be00fc58d84841/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/68ccb790666283eeb7175ee670d86f2a9bccd05a278e10c5b4be00fc58d84841/hostname",
	        "HostsPath": "/var/lib/docker/containers/68ccb790666283eeb7175ee670d86f2a9bccd05a278e10c5b4be00fc58d84841/hosts",
	        "LogPath": "/var/lib/docker/containers/68ccb790666283eeb7175ee670d86f2a9bccd05a278e10c5b4be00fc58d84841/68ccb790666283eeb7175ee670d86f2a9bccd05a278e10c5b4be00fc58d84841-json.log",
	        "Name": "/pause-508007",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-508007:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-508007",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "68ccb790666283eeb7175ee670d86f2a9bccd05a278e10c5b4be00fc58d84841",
	                "LowerDir": "/var/lib/docker/overlay2/04a5989ee425a972c21d0faf2404a913ad2f2bc60138f0d2137a201ad5d78598-init/diff:/var/lib/docker/overlay2/5011226d55616c9977b14c1fe617d1302fe59373df05ce8ec6e21b79143a1c57/diff",
	                "MergedDir": "/var/lib/docker/overlay2/04a5989ee425a972c21d0faf2404a913ad2f2bc60138f0d2137a201ad5d78598/merged",
	                "UpperDir": "/var/lib/docker/overlay2/04a5989ee425a972c21d0faf2404a913ad2f2bc60138f0d2137a201ad5d78598/diff",
	                "WorkDir": "/var/lib/docker/overlay2/04a5989ee425a972c21d0faf2404a913ad2f2bc60138f0d2137a201ad5d78598/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-508007",
	                "Source": "/var/lib/docker/volumes/pause-508007/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-508007",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-508007",
	                "name.minikube.sigs.k8s.io": "pause-508007",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ad8365941afaa5567caa29a2aa067b91e6586f3ac80adb18c90f1341473c7ea1",
	            "SandboxKey": "/var/run/docker/netns/ad8365941afa",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33403"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33404"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33407"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33405"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33406"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-508007": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e2:c6:43:3a:f1:12",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1cec166125ff16e07c4d7c1c69b082d93940ee8717545c2dbc2c08629c1de252",
	                    "EndpointID": "505d58bec1f8a1592eda24778b740c9fe01ff2ad5dad974f8736d2793ffe4573",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-508007",
	                        "68ccb7906662"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
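The inspect output shows Docker's view of the node: the kic container is Running and not Paused, so the exit status 80 above comes from the guest-side runtime listing, not from Docker itself. The two relevant fields can also be read directly with a format string (illustrative one-liner):

	# Prints e.g. "Status=running Paused=false" for the node container.
	docker inspect -f 'Status={{.State.Status}} Paused={{.State.Paused}}' pause-508007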
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-508007 -n pause-508007
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-508007 -n pause-508007: exit status 2 (357.963171ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-508007 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-508007 logs -n 25: (1.422859758s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-365903 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                           │ NoKubernetes-365903       │ jenkins │ v1.37.0 │ 06 Dec 25 11:36 UTC │ 06 Dec 25 11:37 UTC │
	│ start   │ -p missing-upgrade-887720 --memory=3072 --driver=docker  --container-runtime=crio                                                               │ missing-upgrade-887720    │ jenkins │ v1.35.0 │ 06 Dec 25 11:36 UTC │ 06 Dec 25 11:37 UTC │
	│ start   │ -p NoKubernetes-365903 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-365903       │ jenkins │ v1.37.0 │ 06 Dec 25 11:37 UTC │ 06 Dec 25 11:37 UTC │
	│ start   │ -p missing-upgrade-887720 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ missing-upgrade-887720    │ jenkins │ v1.37.0 │ 06 Dec 25 11:37 UTC │ 06 Dec 25 11:38 UTC │
	│ delete  │ -p NoKubernetes-365903                                                                                                                          │ NoKubernetes-365903       │ jenkins │ v1.37.0 │ 06 Dec 25 11:37 UTC │ 06 Dec 25 11:37 UTC │
	│ start   │ -p NoKubernetes-365903 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-365903       │ jenkins │ v1.37.0 │ 06 Dec 25 11:37 UTC │ 06 Dec 25 11:37 UTC │
	│ ssh     │ -p NoKubernetes-365903 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-365903       │ jenkins │ v1.37.0 │ 06 Dec 25 11:37 UTC │                     │
	│ stop    │ -p NoKubernetes-365903                                                                                                                          │ NoKubernetes-365903       │ jenkins │ v1.37.0 │ 06 Dec 25 11:37 UTC │ 06 Dec 25 11:37 UTC │
	│ start   │ -p NoKubernetes-365903 --driver=docker  --container-runtime=crio                                                                                │ NoKubernetes-365903       │ jenkins │ v1.37.0 │ 06 Dec 25 11:37 UTC │ 06 Dec 25 11:38 UTC │
	│ ssh     │ -p NoKubernetes-365903 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-365903       │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │                     │
	│ delete  │ -p NoKubernetes-365903                                                                                                                          │ NoKubernetes-365903       │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ start   │ -p kubernetes-upgrade-432995 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio        │ kubernetes-upgrade-432995 │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ delete  │ -p missing-upgrade-887720                                                                                                                       │ missing-upgrade-887720    │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ start   │ -p stopped-upgrade-130351 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ stopped-upgrade-130351    │ jenkins │ v1.35.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:39 UTC │
	│ stop    │ -p kubernetes-upgrade-432995                                                                                                                    │ kubernetes-upgrade-432995 │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ start   │ -p kubernetes-upgrade-432995 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-432995 │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │                     │
	│ stop    │ stopped-upgrade-130351 stop                                                                                                                     │ stopped-upgrade-130351    │ jenkins │ v1.35.0 │ 06 Dec 25 11:39 UTC │ 06 Dec 25 11:39 UTC │
	│ start   │ -p stopped-upgrade-130351 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ stopped-upgrade-130351    │ jenkins │ v1.37.0 │ 06 Dec 25 11:39 UTC │ 06 Dec 25 11:43 UTC │
	│ delete  │ -p stopped-upgrade-130351                                                                                                                       │ stopped-upgrade-130351    │ jenkins │ v1.37.0 │ 06 Dec 25 11:43 UTC │ 06 Dec 25 11:43 UTC │
	│ start   │ -p running-upgrade-141321 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ running-upgrade-141321    │ jenkins │ v1.35.0 │ 06 Dec 25 11:43 UTC │ 06 Dec 25 11:44 UTC │
	│ start   │ -p running-upgrade-141321 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ running-upgrade-141321    │ jenkins │ v1.37.0 │ 06 Dec 25 11:44 UTC │ 06 Dec 25 11:48 UTC │
	│ delete  │ -p running-upgrade-141321                                                                                                                       │ running-upgrade-141321    │ jenkins │ v1.37.0 │ 06 Dec 25 11:48 UTC │ 06 Dec 25 11:48 UTC │
	│ start   │ -p pause-508007 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                       │ pause-508007              │ jenkins │ v1.37.0 │ 06 Dec 25 11:48 UTC │ 06 Dec 25 11:50 UTC │
	│ start   │ -p pause-508007 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                │ pause-508007              │ jenkins │ v1.37.0 │ 06 Dec 25 11:50 UTC │ 06 Dec 25 11:50 UTC │
	│ pause   │ -p pause-508007 --alsologtostderr -v=5                                                                                                          │ pause-508007              │ jenkins │ v1.37.0 │ 06 Dec 25 11:50 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 11:50:06
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 11:50:06.712534  576251 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:50:06.712800  576251 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:50:06.712837  576251 out.go:374] Setting ErrFile to fd 2...
	I1206 11:50:06.712862  576251 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:50:06.713254  576251 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 11:50:06.713739  576251 out.go:368] Setting JSON to false
	I1206 11:50:06.714786  576251 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":12758,"bootTime":1765009049,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 11:50:06.715080  576251 start.go:143] virtualization:  
	I1206 11:50:06.718200  576251 out.go:179] * [pause-508007] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 11:50:06.722124  576251 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 11:50:06.722410  576251 notify.go:221] Checking for updates...
	I1206 11:50:06.728357  576251 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 11:50:06.731121  576251 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 11:50:06.734062  576251 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-362985/.minikube
	I1206 11:50:06.736968  576251 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 11:50:06.739960  576251 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 11:50:06.743634  576251 config.go:182] Loaded profile config "pause-508007": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 11:50:06.744476  576251 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 11:50:06.778676  576251 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 11:50:06.778797  576251 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:50:06.846741  576251 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-06 11:50:06.837126033 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:50:06.846932  576251 docker.go:319] overlay module found
	I1206 11:50:06.850144  576251 out.go:179] * Using the docker driver based on existing profile
	I1206 11:50:06.853179  576251 start.go:309] selected driver: docker
	I1206 11:50:06.853203  576251 start.go:927] validating driver "docker" against &{Name:pause-508007 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-508007 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:50:06.853340  576251 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 11:50:06.853442  576251 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:50:06.907623  576251 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-06 11:50:06.897578497 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
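The docker system info --format "{{json .}}" invocation above is what produces the single JSON blob recorded in this log. As a minimal sketch (not minikube's actual code; the dockerInfo struct below is our own and covers only a handful of the fields shown), the same call can be issued and decoded like this:

    // Sketch: run the same query the log shows and decode a few fields.
    // The real payload carries every field printed in the info.go:266 line above.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type dockerInfo struct {
        ServerVersion string `json:"ServerVersion"`
        OSType        string `json:"OSType"`
        Architecture  string `json:"Architecture"`
        NCPU          int    `json:"NCPU"`
        MemTotal      int64  `json:"MemTotal"`
    }

    func main() {
        out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
        if err != nil {
            panic(err)
        }
        var info dockerInfo
        if err := json.Unmarshal(out, &info); err != nil {
            panic(err)
        }
        fmt.Printf("docker %s on %s/%s, %d CPUs, %d bytes RAM\n",
            info.ServerVersion, info.OSType, info.Architecture, info.NCPU, info.MemTotal)
    }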
	I1206 11:50:06.908013  576251 cni.go:84] Creating CNI manager for ""
	I1206 11:50:06.908087  576251 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 11:50:06.908133  576251 start.go:353] cluster config:
	{Name:pause-508007 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-508007 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:50:06.913124  576251 out.go:179] * Starting "pause-508007" primary control-plane node in "pause-508007" cluster
	I1206 11:50:06.915986  576251 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 11:50:06.918915  576251 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 11:50:06.921760  576251 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 11:50:06.921821  576251 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1206 11:50:06.921836  576251 cache.go:65] Caching tarball of preloaded images
	I1206 11:50:06.921834  576251 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 11:50:06.921977  576251 preload.go:238] Found /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1206 11:50:06.921990  576251 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1206 11:50:06.922169  576251 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/pause-508007/config.json ...
	I1206 11:50:06.942575  576251 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 11:50:06.942598  576251 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 11:50:06.942616  576251 cache.go:243] Successfully downloaded all kic artifacts
	I1206 11:50:06.942646  576251 start.go:360] acquireMachinesLock for pause-508007: {Name:mk19bb9c866dfbc476292156df9e57150dcf3d95 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:50:06.942707  576251 start.go:364] duration metric: took 38.228µs to acquireMachinesLock for "pause-508007"
	I1206 11:50:06.942741  576251 start.go:96] Skipping create...Using existing machine configuration
	I1206 11:50:06.942750  576251 fix.go:54] fixHost starting: 
	I1206 11:50:06.943004  576251 cli_runner.go:164] Run: docker container inspect pause-508007 --format={{.State.Status}}
	I1206 11:50:06.959759  576251 fix.go:112] recreateIfNeeded on pause-508007: state=Running err=<nil>
	W1206 11:50:06.959792  576251 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 11:50:06.962969  576251 out.go:252] * Updating the running docker "pause-508007" container ...
	I1206 11:50:06.963029  576251 machine.go:94] provisionDockerMachine start ...
	I1206 11:50:06.963128  576251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-508007
	I1206 11:50:06.980652  576251 main.go:143] libmachine: Using SSH client type: native
	I1206 11:50:06.981002  576251 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1206 11:50:06.981017  576251 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 11:50:07.135542  576251 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-508007
	
	I1206 11:50:07.135577  576251 ubuntu.go:182] provisioning hostname "pause-508007"
	I1206 11:50:07.135648  576251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-508007
	I1206 11:50:07.154561  576251 main.go:143] libmachine: Using SSH client type: native
	I1206 11:50:07.154881  576251 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1206 11:50:07.154893  576251 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-508007 && echo "pause-508007" | sudo tee /etc/hostname
	I1206 11:50:07.317121  576251 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-508007
	
	I1206 11:50:07.317279  576251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-508007
	I1206 11:50:07.335276  576251 main.go:143] libmachine: Using SSH client type: native
	I1206 11:50:07.335632  576251 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1206 11:50:07.335648  576251 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-508007' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-508007/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-508007' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 11:50:07.491801  576251 main.go:143] libmachine: SSH cmd err, output: <nil>: 
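The shell fragment the provisioner just ran makes the new hostname resolve locally: if no /etc/hosts line ends in "pause-508007", it either rewrites the 127.0.1.1 entry or appends one. A sketch of generating that fragment for an arbitrary name, with hostsFixScript as a hypothetical helper label of ours, not minikube's:

    // Sketch: reproduce the /etc/hosts-patching script seen in the log
    // for any hostname. hostsFixScript is a hypothetical helper name.
    package main

    import "fmt"

    func hostsFixScript(name string) string {
        return fmt.Sprintf(`
    if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
      fi
    fi`, name)
    }

    func main() { fmt.Println(hostsFixScript("pause-508007")) }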
	I1206 11:50:07.491825  576251 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-362985/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-362985/.minikube}
	I1206 11:50:07.491857  576251 ubuntu.go:190] setting up certificates
	I1206 11:50:07.491867  576251 provision.go:84] configureAuth start
	I1206 11:50:07.491930  576251 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-508007
	I1206 11:50:07.509391  576251 provision.go:143] copyHostCerts
	I1206 11:50:07.509482  576251 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem, removing ...
	I1206 11:50:07.509503  576251 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem
	I1206 11:50:07.509578  576251 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem (1082 bytes)
	I1206 11:50:07.509680  576251 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem, removing ...
	I1206 11:50:07.509691  576251 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem
	I1206 11:50:07.509719  576251 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem (1123 bytes)
	I1206 11:50:07.509776  576251 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem, removing ...
	I1206 11:50:07.509786  576251 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem
	I1206 11:50:07.509810  576251 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem (1679 bytes)
	I1206 11:50:07.509863  576251 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem org=jenkins.pause-508007 san=[127.0.0.1 192.168.85.2 localhost minikube pause-508007]
	I1206 11:50:07.751466  576251 provision.go:177] copyRemoteCerts
	I1206 11:50:07.751567  576251 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 11:50:07.751625  576251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-508007
	I1206 11:50:07.769824  576251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/pause-508007/id_rsa Username:docker}
	I1206 11:50:07.875869  576251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1206 11:50:07.898578  576251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 11:50:07.917803  576251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 11:50:07.936775  576251 provision.go:87] duration metric: took 444.873288ms to configureAuth
	I1206 11:50:07.936915  576251 ubuntu.go:206] setting minikube options for container-runtime
	I1206 11:50:07.937241  576251 config.go:182] Loaded profile config "pause-508007": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 11:50:07.937431  576251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-508007
	I1206 11:50:07.955159  576251 main.go:143] libmachine: Using SSH client type: native
	I1206 11:50:07.955582  576251 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1206 11:50:07.955605  576251 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 11:50:13.361957  576251 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 11:50:13.361980  576251 machine.go:97] duration metric: took 6.398937713s to provisionDockerMachine
	I1206 11:50:13.361997  576251 start.go:293] postStartSetup for "pause-508007" (driver="docker")
	I1206 11:50:13.362044  576251 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 11:50:13.362213  576251 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 11:50:13.362257  576251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-508007
	I1206 11:50:13.383454  576251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/pause-508007/id_rsa Username:docker}
	I1206 11:50:13.491860  576251 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 11:50:13.495370  576251 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 11:50:13.495466  576251 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 11:50:13.495486  576251 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-362985/.minikube/addons for local assets ...
	I1206 11:50:13.495549  576251 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-362985/.minikube/files for local assets ...
	I1206 11:50:13.495656  576251 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem -> 3648552.pem in /etc/ssl/certs
	I1206 11:50:13.495770  576251 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 11:50:13.503737  576251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem --> /etc/ssl/certs/3648552.pem (1708 bytes)
	I1206 11:50:13.522677  576251 start.go:296] duration metric: took 160.642182ms for postStartSetup
	I1206 11:50:13.522762  576251 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 11:50:13.522844  576251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-508007
	I1206 11:50:13.541046  576251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/pause-508007/id_rsa Username:docker}
	I1206 11:50:13.645150  576251 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 11:50:13.650544  576251 fix.go:56] duration metric: took 6.707785738s for fixHost
	I1206 11:50:13.650573  576251 start.go:83] releasing machines lock for "pause-508007", held for 6.707853143s
	I1206 11:50:13.650645  576251 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-508007
	I1206 11:50:13.668458  576251 ssh_runner.go:195] Run: cat /version.json
	I1206 11:50:13.668524  576251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-508007
	I1206 11:50:13.668584  576251 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 11:50:13.668652  576251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-508007
	I1206 11:50:13.698338  576251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/pause-508007/id_rsa Username:docker}
	I1206 11:50:13.699339  576251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/pause-508007/id_rsa Username:docker}
	I1206 11:50:13.803149  576251 ssh_runner.go:195] Run: systemctl --version
	I1206 11:50:13.894900  576251 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 11:50:13.934127  576251 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 11:50:13.938713  576251 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 11:50:13.938804  576251 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 11:50:13.946843  576251 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 11:50:13.946867  576251 start.go:496] detecting cgroup driver to use...
	I1206 11:50:13.946918  576251 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 11:50:13.947000  576251 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 11:50:13.962958  576251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 11:50:13.979989  576251 docker.go:218] disabling cri-docker service (if available) ...
	I1206 11:50:13.980081  576251 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 11:50:13.998986  576251 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 11:50:14.016953  576251 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 11:50:14.171293  576251 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 11:50:14.311817  576251 docker.go:234] disabling docker service ...
	I1206 11:50:14.311888  576251 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 11:50:14.327915  576251 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 11:50:14.341794  576251 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 11:50:14.485429  576251 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 11:50:14.626120  576251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 11:50:14.639854  576251 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 11:50:14.655018  576251 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 11:50:14.655155  576251 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:50:14.665068  576251 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 11:50:14.665196  576251 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:50:14.674838  576251 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:50:14.684604  576251 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:50:14.694540  576251 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 11:50:14.703422  576251 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:50:14.713028  576251 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:50:14.722446  576251 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:50:14.740342  576251 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 11:50:14.748999  576251 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 11:50:14.756980  576251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:50:14.892489  576251 ssh_runner.go:195] Run: sudo systemctl restart crio
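Each of the sed runs between 11:50:14.655 and 11:50:14.722 rewrites one key of /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, default sysctls) before crio is restarted. A rough Go equivalent of one such edit, with setTOMLKey as a hypothetical helper rather than anything from minikube:

    // Sketch: an in-place `key = value` rewrite equivalent to
    // sed -i 's|^.*pause_image = .*$|pause_image = "..."|' above.
    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // setTOMLKey replaces any line assigning key with `key = "value"`.
    func setTOMLKey(path, key, value string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
        return os.WriteFile(path, out, 0o644)
    }

    func main() {
        if err := setTOMLKey("/etc/crio/crio.conf.d/02-crio.conf",
            "pause_image", "registry.k8s.io/pause:3.10.1"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }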
	I1206 11:50:15.165957  576251 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 11:50:15.166040  576251 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 11:50:15.170319  576251 start.go:564] Will wait 60s for crictl version
	I1206 11:50:15.170392  576251 ssh_runner.go:195] Run: which crictl
	I1206 11:50:15.174266  576251 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 11:50:15.200892  576251 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 11:50:15.201054  576251 ssh_runner.go:195] Run: crio --version
	I1206 11:50:15.231830  576251 ssh_runner.go:195] Run: crio --version
	I1206 11:50:15.264266  576251 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1206 11:50:15.267268  576251 cli_runner.go:164] Run: docker network inspect pause-508007 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
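The --format argument in that docker network inspect call is an inline Go template that makes docker itself emit the network's subnet, gateway, MTU, and container IPs as JSON. The same facts are available from the command's default JSON output; a sketch of reading them that way (the struct names are ours):

    // Sketch: fetch subnet/gateway for the cluster network without the
    // inline template, using docker network inspect's default JSON array.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type netInspect struct {
        Name string `json:"Name"`
        IPAM struct {
            Config []struct {
                Subnet  string `json:"Subnet"`
                Gateway string `json:"Gateway"`
            } `json:"Config"`
        } `json:"IPAM"`
    }

    func main() {
        out, err := exec.Command("docker", "network", "inspect", "pause-508007").Output()
        if err != nil {
            panic(err)
        }
        var nets []netInspect
        if err := json.Unmarshal(out, &nets); err != nil {
            panic(err)
        }
        for _, n := range nets {
            for _, c := range n.IPAM.Config {
                fmt.Printf("%s: subnet %s gateway %s\n", n.Name, c.Subnet, c.Gateway)
            }
        }
    }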
	I1206 11:50:15.283519  576251 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1206 11:50:15.287662  576251 kubeadm.go:884] updating cluster {Name:pause-508007 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-508007 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 11:50:15.287847  576251 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 11:50:15.287912  576251 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 11:50:15.323532  576251 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 11:50:15.323559  576251 crio.go:433] Images already preloaded, skipping extraction
	I1206 11:50:15.323621  576251 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 11:50:15.349196  576251 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 11:50:15.349218  576251 cache_images.go:86] Images are preloaded, skipping loading
	I1206 11:50:15.349226  576251 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1206 11:50:15.349322  576251 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-508007 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-508007 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 11:50:15.349404  576251 ssh_runner.go:195] Run: crio config
	I1206 11:50:15.416911  576251 cni.go:84] Creating CNI manager for ""
	I1206 11:50:15.416932  576251 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 11:50:15.416980  576251 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 11:50:15.417011  576251 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-508007 NodeName:pause-508007 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 11:50:15.417148  576251 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-508007"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 11:50:15.417223  576251 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 11:50:15.425213  576251 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 11:50:15.425287  576251 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 11:50:15.433145  576251 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1206 11:50:15.446966  576251 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 11:50:15.461034  576251 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
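The 2209 bytes just copied to /var/tmp/minikube/kubeadm.yaml.new are the multi-document YAML printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A sketch of walking those documents with gopkg.in/yaml.v3, as one way to sanity-check such a file:

    // Sketch: list apiVersion/kind for each document in the generated
    // kubeadm config. Requires gopkg.in/yaml.v3 in go.mod.
    package main

    import (
        "errors"
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    type docHeader struct {
        APIVersion string `yaml:"apiVersion"`
        Kind       string `yaml:"kind"`
    }

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            panic(err)
        }
        defer f.Close()
        dec := yaml.NewDecoder(f)
        for {
            var h docHeader
            if err := dec.Decode(&h); errors.Is(err, io.EOF) {
                break
            } else if err != nil {
                panic(err)
            }
            fmt.Printf("%s %s\n", h.APIVersion, h.Kind)
        }
    }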
	I1206 11:50:15.479238  576251 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1206 11:50:15.485738  576251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:50:15.634489  576251 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 11:50:15.648639  576251 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/pause-508007 for IP: 192.168.85.2
	I1206 11:50:15.648704  576251 certs.go:195] generating shared ca certs ...
	I1206 11:50:15.648744  576251 certs.go:227] acquiring lock for ca certs: {Name:mke2ec61a37b6f3abbcbeb9abd23d6a19d011dd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:50:15.648901  576251 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.key
	I1206 11:50:15.648961  576251 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.key
	I1206 11:50:15.648972  576251 certs.go:257] generating profile certs ...
	I1206 11:50:15.649057  576251 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/pause-508007/client.key
	I1206 11:50:15.649125  576251 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/pause-508007/apiserver.key.5feff3c6
	I1206 11:50:15.649170  576251 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/pause-508007/proxy-client.key
	I1206 11:50:15.649282  576251 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855.pem (1338 bytes)
	W1206 11:50:15.649318  576251 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855_empty.pem, impossibly tiny 0 bytes
	I1206 11:50:15.649331  576251 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 11:50:15.649382  576251 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem (1082 bytes)
	I1206 11:50:15.649413  576251 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem (1123 bytes)
	I1206 11:50:15.649440  576251 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem (1679 bytes)
	I1206 11:50:15.649492  576251 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem (1708 bytes)
	I1206 11:50:15.650160  576251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 11:50:15.670192  576251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 11:50:15.688645  576251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 11:50:15.709146  576251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 11:50:15.727282  576251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/pause-508007/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1206 11:50:15.745689  576251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/pause-508007/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 11:50:15.764271  576251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/pause-508007/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 11:50:15.781847  576251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/pause-508007/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 11:50:15.799538  576251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855.pem --> /usr/share/ca-certificates/364855.pem (1338 bytes)
	I1206 11:50:15.817024  576251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem --> /usr/share/ca-certificates/3648552.pem (1708 bytes)
	I1206 11:50:15.834721  576251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 11:50:15.853844  576251 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 11:50:15.872104  576251 ssh_runner.go:195] Run: openssl version
	I1206 11:50:15.879119  576251 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364855.pem
	I1206 11:50:15.890187  576251 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364855.pem /etc/ssl/certs/364855.pem
	I1206 11:50:15.899275  576251 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364855.pem
	I1206 11:50:15.906793  576251 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:36 /usr/share/ca-certificates/364855.pem
	I1206 11:50:15.906926  576251 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364855.pem
	I1206 11:50:15.957973  576251 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 11:50:15.967765  576251 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648552.pem
	I1206 11:50:15.986491  576251 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648552.pem /etc/ssl/certs/3648552.pem
	I1206 11:50:16.002237  576251 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648552.pem
	I1206 11:50:16.012077  576251 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:36 /usr/share/ca-certificates/3648552.pem
	I1206 11:50:16.012216  576251 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648552.pem
	I1206 11:50:16.090131  576251 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 11:50:16.116616  576251 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:50:16.135638  576251 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 11:50:16.161245  576251 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:50:16.171078  576251 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:26 /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:50:16.171208  576251 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:50:16.327506  576251 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1206 11:50:16.337536  576251 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 11:50:16.342588  576251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 11:50:16.400301  576251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 11:50:16.447822  576251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 11:50:16.502836  576251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 11:50:16.550468  576251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 11:50:16.601981  576251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
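Each openssl x509 -noout -checkend 86400 run above exits non-zero when the certificate expires within the next 86400 seconds (24 hours), which is what would trigger regeneration. The same test in Go with crypto/x509, as a sketch:

    // Sketch: Go equivalent of `openssl x509 -checkend 86400` for one of
    // the certificates probed above.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM cert at path expires within d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
        if err != nil {
            panic(err)
        }
        fmt.Println("expires within 24h:", soon)
    }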
	I1206 11:50:16.647114  576251 kubeadm.go:401] StartCluster: {Name:pause-508007 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-508007 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:50:16.647241  576251 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 11:50:16.647335  576251 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 11:50:16.705778  576251 cri.go:89] found id: "133d8dbea499991081a35996bff0233170d2b1a868bf3857b507c1583c7a9ec1"
	I1206 11:50:16.705800  576251 cri.go:89] found id: "32544fef39c6ea0b6046c208ce2e51a0df4f3b80aa280b7c76f9760b3e50af4d"
	I1206 11:50:16.705804  576251 cri.go:89] found id: "2390e459c24305efc554a27a13a06262c95efaabcf624ba8fde250b8bad71a9a"
	I1206 11:50:16.705808  576251 cri.go:89] found id: "3090dcde82b5b2a345d93400577bfcb990c7d3e5129f6ec7e8a432e9ec254f4c"
	I1206 11:50:16.705812  576251 cri.go:89] found id: "aaae6a7e273cd079a387ceb0a651520860046fb1f5d6f1b68400e13685bcb58e"
	I1206 11:50:16.705815  576251 cri.go:89] found id: "29181bd0fdf81e45e507ff05c995486187c8a5460d27b3f5b331dbc17b03e19d"
	I1206 11:50:16.705818  576251 cri.go:89] found id: "ad2550eb9af565122164d80705b913cbc9bf4229a2e36150fead0da5caa0dbcd"
	I1206 11:50:16.705822  576251 cri.go:89] found id: "7d98153e0d77ef5e7104676cb3aa0afe2b6048a911a88464258274f081e58123"
	I1206 11:50:16.705825  576251 cri.go:89] found id: "4e43dc49fbc59be905132b7da10ac28f16fc2c8f552bd7b9c8f94927db5ca288"
	I1206 11:50:16.705857  576251 cri.go:89] found id: "e2d729fb666f2c3372fe7e8b35b9708de236469e7c191e9607f7590ca46e22a1"
	I1206 11:50:16.705867  576251 cri.go:89] found id: "80b9d15d2b9dbc9bd53e824fe457e3a98241cab7b0c1a75bc72b601c225e0a3e"
	I1206 11:50:16.705884  576251 cri.go:89] found id: "9947888e935f9906b6a8f63bf826008ea52d75f58d3993f4ddbe6b0e8d1b28bb"
	I1206 11:50:16.705894  576251 cri.go:89] found id: "102a55336c508b8d8bd4e02e9e13195ac8d952a43ccd1ad108693e4e8e3ddd88"
	I1206 11:50:16.705898  576251 cri.go:89] found id: "c1f11f7bb9c73c4b9f1bda5d2219c7bde9c1a3799ace49169aa65d33d5ef49d0"
	I1206 11:50:16.705902  576251 cri.go:89] found id: ""
	I1206 11:50:16.705967  576251 ssh_runner.go:195] Run: sudo runc list -f json
	W1206 11:50:16.726641  576251 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T11:50:16Z" level=error msg="open /run/runc: no such file or directory"
	I1206 11:50:16.726784  576251 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 11:50:16.744665  576251 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 11:50:16.744727  576251 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 11:50:16.744804  576251 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 11:50:16.757576  576251 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 11:50:16.758334  576251 kubeconfig.go:125] found "pause-508007" server: "https://192.168.85.2:8443"
	I1206 11:50:16.759282  576251 kapi.go:59] client config for pause-508007: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22047-362985/.minikube/profiles/pause-508007/client.crt", KeyFile:"/home/jenkins/minikube-integration/22047-362985/.minikube/profiles/pause-508007/client.key", CAFile:"/home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 11:50:16.760115  576251 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1206 11:50:16.760176  576251 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1206 11:50:16.760197  576251 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1206 11:50:16.760215  576251 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1206 11:50:16.760245  576251 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1206 11:50:16.760598  576251 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 11:50:16.770560  576251 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1206 11:50:16.770643  576251 kubeadm.go:602] duration metric: took 25.887874ms to restartPrimaryControlPlane
	I1206 11:50:16.770667  576251 kubeadm.go:403] duration metric: took 123.564692ms to StartCluster
	I1206 11:50:16.770706  576251 settings.go:142] acquiring lock: {Name:mk789e01bfd4ab9fa1e2a8415fa99b570b26926a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:50:16.770801  576251 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 11:50:16.771776  576251 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/kubeconfig: {Name:mk779651834cfbdc6f0b5e8f5a9abc0f05106181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:50:16.772086  576251 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 11:50:16.772741  576251 config.go:182] Loaded profile config "pause-508007": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 11:50:16.772704  576251 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 11:50:16.775275  576251 out.go:179] * Verifying Kubernetes components...
	I1206 11:50:16.775388  576251 out.go:179] * Enabled addons: 
	I1206 11:50:16.778195  576251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:50:16.778327  576251 addons.go:530] duration metric: took 5.633081ms for enable addons: enabled=[]
	I1206 11:50:17.087086  576251 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 11:50:17.106560  576251 node_ready.go:35] waiting up to 6m0s for node "pause-508007" to be "Ready" ...
	I1206 11:50:20.615307  576251 node_ready.go:49] node "pause-508007" is "Ready"
	I1206 11:50:20.615338  576251 node_ready.go:38] duration metric: took 3.50873829s for node "pause-508007" to be "Ready" ...
	I1206 11:50:20.615352  576251 api_server.go:52] waiting for apiserver process to appear ...
	I1206 11:50:20.615445  576251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:50:20.633944  576251 api_server.go:72] duration metric: took 3.861784957s to wait for apiserver process to appear ...
	I1206 11:50:20.633971  576251 api_server.go:88] waiting for apiserver healthz status ...
	I1206 11:50:20.633992  576251 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:50:20.681481  576251 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 11:50:20.681511  576251 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 11:50:21.134087  576251 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:50:21.144212  576251 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1206 11:50:21.144294  576251 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1206 11:50:21.635030  576251 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:50:21.645107  576251 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1206 11:50:21.645194  576251 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1206 11:50:22.135116  576251 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:50:22.145456  576251 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1206 11:50:22.146712  576251 api_server.go:141] control plane version: v1.34.2
	I1206 11:50:22.146749  576251 api_server.go:131] duration metric: took 1.512761104s to wait for apiserver health ...
	I1206 11:50:22.146776  576251 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 11:50:22.150073  576251 system_pods.go:59] 7 kube-system pods found
	I1206 11:50:22.150121  576251 system_pods.go:61] "coredns-66bc5c9577-94krv" [cb647f7b-cb31-4d99-9254-82bfdce366fc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 11:50:22.150130  576251 system_pods.go:61] "etcd-pause-508007" [7488d950-14a7-4590-9fd6-4510bcfe6034] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 11:50:22.150136  576251 system_pods.go:61] "kindnet-9zw56" [407010ad-d437-4e90-bbdc-f9eeb5479739] Running
	I1206 11:50:22.150142  576251 system_pods.go:61] "kube-apiserver-pause-508007" [3d567c62-c73b-4810-b13b-2ee29df3a862] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 11:50:22.150149  576251 system_pods.go:61] "kube-controller-manager-pause-508007" [c8b8d23a-135b-40a9-9fa9-036e8f53f330] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 11:50:22.150158  576251 system_pods.go:61] "kube-proxy-dn8b7" [9b63dc69-331f-47e6-b0bb-f21401e05ff6] Running
	I1206 11:50:22.150164  576251 system_pods.go:61] "kube-scheduler-pause-508007" [2608d0ee-fbb1-4dc8-b227-3d375af1cb7d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 11:50:22.150170  576251 system_pods.go:74] duration metric: took 3.381066ms to wait for pod list to return data ...
	I1206 11:50:22.150187  576251 default_sa.go:34] waiting for default service account to be created ...
	I1206 11:50:22.152933  576251 default_sa.go:45] found service account: "default"
	I1206 11:50:22.152964  576251 default_sa.go:55] duration metric: took 2.77055ms for default service account to be created ...
	I1206 11:50:22.152984  576251 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 11:50:22.157234  576251 system_pods.go:86] 7 kube-system pods found
	I1206 11:50:22.157271  576251 system_pods.go:89] "coredns-66bc5c9577-94krv" [cb647f7b-cb31-4d99-9254-82bfdce366fc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 11:50:22.157281  576251 system_pods.go:89] "etcd-pause-508007" [7488d950-14a7-4590-9fd6-4510bcfe6034] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 11:50:22.157287  576251 system_pods.go:89] "kindnet-9zw56" [407010ad-d437-4e90-bbdc-f9eeb5479739] Running
	I1206 11:50:22.157293  576251 system_pods.go:89] "kube-apiserver-pause-508007" [3d567c62-c73b-4810-b13b-2ee29df3a862] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 11:50:22.157300  576251 system_pods.go:89] "kube-controller-manager-pause-508007" [c8b8d23a-135b-40a9-9fa9-036e8f53f330] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 11:50:22.157305  576251 system_pods.go:89] "kube-proxy-dn8b7" [9b63dc69-331f-47e6-b0bb-f21401e05ff6] Running
	I1206 11:50:22.157317  576251 system_pods.go:89] "kube-scheduler-pause-508007" [2608d0ee-fbb1-4dc8-b227-3d375af1cb7d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 11:50:22.157335  576251 system_pods.go:126] duration metric: took 4.343897ms to wait for k8s-apps to be running ...
	I1206 11:50:22.157343  576251 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 11:50:22.157402  576251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 11:50:22.170301  576251 system_svc.go:56] duration metric: took 12.9479ms WaitForService to wait for kubelet
	I1206 11:50:22.170330  576251 kubeadm.go:587] duration metric: took 5.398175496s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 11:50:22.170349  576251 node_conditions.go:102] verifying NodePressure condition ...
	I1206 11:50:22.173328  576251 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1206 11:50:22.173366  576251 node_conditions.go:123] node cpu capacity is 2
	I1206 11:50:22.173379  576251 node_conditions.go:105] duration metric: took 3.025683ms to run NodePressure ...
	I1206 11:50:22.173392  576251 start.go:242] waiting for startup goroutines ...
	I1206 11:50:22.173430  576251 start.go:247] waiting for cluster config update ...
	I1206 11:50:22.173446  576251 start.go:256] writing updated cluster config ...
	I1206 11:50:22.173774  576251 ssh_runner.go:195] Run: rm -f paused
	I1206 11:50:22.177489  576251 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 11:50:22.178251  576251 kapi.go:59] client config for pause-508007: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22047-362985/.minikube/profiles/pause-508007/client.crt", KeyFile:"/home/jenkins/minikube-integration/22047-362985/.minikube/profiles/pause-508007/client.key", CAFile:"/home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 11:50:22.181467  576251 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-94krv" in "kube-system" namespace to be "Ready" or be gone ...
	W1206 11:50:24.187279  576251 pod_ready.go:104] pod "coredns-66bc5c9577-94krv" is not "Ready", error: <nil>
	W1206 11:50:26.686726  576251 pod_ready.go:104] pod "coredns-66bc5c9577-94krv" is not "Ready", error: <nil>
	I1206 11:50:28.186934  576251 pod_ready.go:94] pod "coredns-66bc5c9577-94krv" is "Ready"
	I1206 11:50:28.186964  576251 pod_ready.go:86] duration metric: took 6.005469329s for pod "coredns-66bc5c9577-94krv" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:50:28.189587  576251 pod_ready.go:83] waiting for pod "etcd-pause-508007" in "kube-system" namespace to be "Ready" or be gone ...
	W1206 11:50:30.195679  576251 pod_ready.go:104] pod "etcd-pause-508007" is not "Ready", error: <nil>
	W1206 11:50:32.695686  576251 pod_ready.go:104] pod "etcd-pause-508007" is not "Ready", error: <nil>
	W1206 11:50:35.196461  576251 pod_ready.go:104] pod "etcd-pause-508007" is not "Ready", error: <nil>
	I1206 11:50:35.695338  576251 pod_ready.go:94] pod "etcd-pause-508007" is "Ready"
	I1206 11:50:35.695367  576251 pod_ready.go:86] duration metric: took 7.505751324s for pod "etcd-pause-508007" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:50:35.697872  576251 pod_ready.go:83] waiting for pod "kube-apiserver-pause-508007" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:50:35.702433  576251 pod_ready.go:94] pod "kube-apiserver-pause-508007" is "Ready"
	I1206 11:50:35.702464  576251 pod_ready.go:86] duration metric: took 4.565028ms for pod "kube-apiserver-pause-508007" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:50:35.705092  576251 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-508007" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:50:36.211125  576251 pod_ready.go:94] pod "kube-controller-manager-pause-508007" is "Ready"
	I1206 11:50:36.211155  576251 pod_ready.go:86] duration metric: took 506.037784ms for pod "kube-controller-manager-pause-508007" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:50:36.213709  576251 pod_ready.go:83] waiting for pod "kube-proxy-dn8b7" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:50:36.293482  576251 pod_ready.go:94] pod "kube-proxy-dn8b7" is "Ready"
	I1206 11:50:36.293510  576251 pod_ready.go:86] duration metric: took 79.775674ms for pod "kube-proxy-dn8b7" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:50:36.492577  576251 pod_ready.go:83] waiting for pod "kube-scheduler-pause-508007" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:50:36.892888  576251 pod_ready.go:94] pod "kube-scheduler-pause-508007" is "Ready"
	I1206 11:50:36.892919  576251 pod_ready.go:86] duration metric: took 400.311228ms for pod "kube-scheduler-pause-508007" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:50:36.892933  576251 pod_ready.go:40] duration metric: took 14.715399961s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 11:50:36.946925  576251 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1206 11:50:36.949991  576251 out.go:179] * Done! kubectl is now configured to use "pause-508007" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 06 11:50:16 pause-508007 crio[2088]: time="2025-12-06T11:50:16.179697504Z" level=info msg="Started container" PID=2328 containerID=aaae6a7e273cd079a387ceb0a651520860046fb1f5d6f1b68400e13685bcb58e description=kube-system/kube-proxy-dn8b7/kube-proxy id=f1a63cd2-c150-4d8c-911d-b4b92c3474cb name=/runtime.v1.RuntimeService/StartContainer sandboxID=ce409ee17bf60174c3fc91203b275e307707688270607f0ce6b53d41d507ca8c
	Dec 06 11:50:16 pause-508007 crio[2088]: time="2025-12-06T11:50:16.181365227Z" level=info msg="Created container 133d8dbea499991081a35996bff0233170d2b1a868bf3857b507c1583c7a9ec1: kube-system/etcd-pause-508007/etcd" id=9f666972-7ec8-40d1-8f6d-63080e97739b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 11:50:16 pause-508007 crio[2088]: time="2025-12-06T11:50:16.182206317Z" level=info msg="Starting container: 133d8dbea499991081a35996bff0233170d2b1a868bf3857b507c1583c7a9ec1" id=7fcecdc4-0dd5-47f8-a22d-76d5192e2503 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 11:50:16 pause-508007 crio[2088]: time="2025-12-06T11:50:16.184471748Z" level=info msg="Created container 2390e459c24305efc554a27a13a06262c95efaabcf624ba8fde250b8bad71a9a: kube-system/kube-controller-manager-pause-508007/kube-controller-manager" id=13de4451-65c9-4796-b70b-e2956f4d33c8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 11:50:16 pause-508007 crio[2088]: time="2025-12-06T11:50:16.184561865Z" level=info msg="Created container 3090dcde82b5b2a345d93400577bfcb990c7d3e5129f6ec7e8a432e9ec254f4c: kube-system/coredns-66bc5c9577-94krv/coredns" id=b74c89f5-f783-435c-a22a-6447e4f542c2 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 11:50:16 pause-508007 crio[2088]: time="2025-12-06T11:50:16.186839292Z" level=info msg="Created container 32544fef39c6ea0b6046c208ce2e51a0df4f3b80aa280b7c76f9760b3e50af4d: kube-system/kube-scheduler-pause-508007/kube-scheduler" id=178ac305-4972-48e3-969e-fd85660ff3df name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 11:50:16 pause-508007 crio[2088]: time="2025-12-06T11:50:16.187735685Z" level=info msg="Started container" PID=2358 containerID=133d8dbea499991081a35996bff0233170d2b1a868bf3857b507c1583c7a9ec1 description=kube-system/etcd-pause-508007/etcd id=7fcecdc4-0dd5-47f8-a22d-76d5192e2503 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cd876164c8a62f0985897b1b47210f978c2fbfea1dc80865fd266f061091e046
	Dec 06 11:50:16 pause-508007 crio[2088]: time="2025-12-06T11:50:16.188233781Z" level=info msg="Starting container: 2390e459c24305efc554a27a13a06262c95efaabcf624ba8fde250b8bad71a9a" id=7cc8abf0-49c2-4ef6-86cc-1df924ccefcf name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 11:50:16 pause-508007 crio[2088]: time="2025-12-06T11:50:16.195923881Z" level=info msg="Starting container: 32544fef39c6ea0b6046c208ce2e51a0df4f3b80aa280b7c76f9760b3e50af4d" id=0d541e1e-caf9-4504-91ef-e1d6b4eb2e0a name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 11:50:16 pause-508007 crio[2088]: time="2025-12-06T11:50:16.196122693Z" level=info msg="Starting container: 3090dcde82b5b2a345d93400577bfcb990c7d3e5129f6ec7e8a432e9ec254f4c" id=3a9ce770-7ebc-46ae-b694-f0dc09af184e name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 11:50:16 pause-508007 crio[2088]: time="2025-12-06T11:50:16.207842357Z" level=info msg="Started container" PID=2336 containerID=2390e459c24305efc554a27a13a06262c95efaabcf624ba8fde250b8bad71a9a description=kube-system/kube-controller-manager-pause-508007/kube-controller-manager id=7cc8abf0-49c2-4ef6-86cc-1df924ccefcf name=/runtime.v1.RuntimeService/StartContainer sandboxID=d6051cc0104001c01d60cbb9a07f055a443e4ab35eea18fbf12b59b917330579
	Dec 06 11:50:16 pause-508007 crio[2088]: time="2025-12-06T11:50:16.219925117Z" level=info msg="Started container" PID=2346 containerID=32544fef39c6ea0b6046c208ce2e51a0df4f3b80aa280b7c76f9760b3e50af4d description=kube-system/kube-scheduler-pause-508007/kube-scheduler id=0d541e1e-caf9-4504-91ef-e1d6b4eb2e0a name=/runtime.v1.RuntimeService/StartContainer sandboxID=83f52db18377aabbed472174907a8d372e8a2789ac5d5ebf0ba89973476a595e
	Dec 06 11:50:16 pause-508007 crio[2088]: time="2025-12-06T11:50:16.276257974Z" level=info msg="Started container" PID=2353 containerID=3090dcde82b5b2a345d93400577bfcb990c7d3e5129f6ec7e8a432e9ec254f4c description=kube-system/coredns-66bc5c9577-94krv/coredns id=3a9ce770-7ebc-46ae-b694-f0dc09af184e name=/runtime.v1.RuntimeService/StartContainer sandboxID=c17df71f20be6669ed130ce3e7b63e4e847d4e1429afec74caf2618fe5be4626
	Dec 06 11:50:26 pause-508007 crio[2088]: time="2025-12-06T11:50:26.434723293Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 06 11:50:26 pause-508007 crio[2088]: time="2025-12-06T11:50:26.439110842Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 06 11:50:26 pause-508007 crio[2088]: time="2025-12-06T11:50:26.439147814Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 06 11:50:26 pause-508007 crio[2088]: time="2025-12-06T11:50:26.439176352Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 06 11:50:26 pause-508007 crio[2088]: time="2025-12-06T11:50:26.442980519Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 06 11:50:26 pause-508007 crio[2088]: time="2025-12-06T11:50:26.443031777Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 06 11:50:26 pause-508007 crio[2088]: time="2025-12-06T11:50:26.443055384Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 06 11:50:26 pause-508007 crio[2088]: time="2025-12-06T11:50:26.446211956Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 06 11:50:26 pause-508007 crio[2088]: time="2025-12-06T11:50:26.446246352Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 06 11:50:26 pause-508007 crio[2088]: time="2025-12-06T11:50:26.446270533Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 06 11:50:26 pause-508007 crio[2088]: time="2025-12-06T11:50:26.449434916Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 06 11:50:26 pause-508007 crio[2088]: time="2025-12-06T11:50:26.449471266Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	133d8dbea4999       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   24 seconds ago       Running             etcd                      1                   cd876164c8a62       etcd-pause-508007                      kube-system
	32544fef39c6e       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   24 seconds ago       Running             kube-scheduler            1                   83f52db18377a       kube-scheduler-pause-508007            kube-system
	2390e459c2430       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   24 seconds ago       Running             kube-controller-manager   1                   d6051cc010400       kube-controller-manager-pause-508007   kube-system
	3090dcde82b5b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   24 seconds ago       Running             coredns                   1                   c17df71f20be6       coredns-66bc5c9577-94krv               kube-system
	aaae6a7e273cd       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   24 seconds ago       Running             kube-proxy                1                   ce409ee17bf60       kube-proxy-dn8b7                       kube-system
	29181bd0fdf81       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   24 seconds ago       Running             kindnet-cni               1                   24bfa60713736       kindnet-9zw56                          kube-system
	ad2550eb9af56       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   24 seconds ago       Running             kube-apiserver            1                   4a6975d5bdf80       kube-apiserver-pause-508007            kube-system
	7d98153e0d77e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   35 seconds ago       Exited              coredns                   0                   c17df71f20be6       coredns-66bc5c9577-94krv               kube-system
	4e43dc49fbc59       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   24bfa60713736       kindnet-9zw56                          kube-system
	e2d729fb666f2       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   About a minute ago   Exited              kube-proxy                0                   ce409ee17bf60       kube-proxy-dn8b7                       kube-system
	80b9d15d2b9db       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   About a minute ago   Exited              kube-scheduler            0                   83f52db18377a       kube-scheduler-pause-508007            kube-system
	9947888e935f9       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   About a minute ago   Exited              kube-controller-manager   0                   d6051cc010400       kube-controller-manager-pause-508007   kube-system
	102a55336c508       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   About a minute ago   Exited              kube-apiserver            0                   4a6975d5bdf80       kube-apiserver-pause-508007            kube-system
	c1f11f7bb9c73       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   About a minute ago   Exited              etcd                      0                   cd876164c8a62       etcd-pause-508007                      kube-system
	
	
	==> coredns [3090dcde82b5b2a345d93400577bfcb990c7d3e5129f6ec7e8a432e9ec254f4c] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58000 - 24814 "HINFO IN 2518647008278542066.4443455033724922931. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.0143583s
	
	
	==> coredns [7d98153e0d77ef5e7104676cb3aa0afe2b6048a911a88464258274f081e58123] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34852 - 22102 "HINFO IN 4574325125218278221.3142240415227874887. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.028101077s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-508007
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-508007
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
	                    minikube.k8s.io/name=pause-508007
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T11_49_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 11:49:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-508007
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 11:50:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 11:50:04 +0000   Sat, 06 Dec 2025 11:49:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 11:50:04 +0000   Sat, 06 Dec 2025 11:49:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 11:50:04 +0000   Sat, 06 Dec 2025 11:49:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 11:50:04 +0000   Sat, 06 Dec 2025 11:50:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-508007
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 276ce0203b90767726fe164c6931608e
	  System UUID:                9470a48f-3d2c-45e6-a671-15b0e69330d8
	  Boot ID:                    b73b980d-8d6b-40e0-82fa-5c1b47c1eef7
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-94krv                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     77s
	  kube-system                 etcd-pause-508007                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         83s
	  kube-system                 kindnet-9zw56                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      78s
	  kube-system                 kube-apiserver-pause-508007             250m (12%)    0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-controller-manager-pause-508007    200m (10%)    0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-proxy-dn8b7                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-scheduler-pause-508007             100m (5%)     0 (0%)      0 (0%)           0 (0%)         84s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 76s                kube-proxy       
	  Normal   Starting                 18s                kube-proxy       
	  Warning  CgroupV1                 90s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  89s (x8 over 89s)  kubelet          Node pause-508007 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    89s (x8 over 89s)  kubelet          Node pause-508007 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     89s (x8 over 89s)  kubelet          Node pause-508007 status is now: NodeHasSufficientPID
	  Normal   Starting                 83s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 83s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  83s                kubelet          Node pause-508007 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    83s                kubelet          Node pause-508007 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     83s                kubelet          Node pause-508007 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           78s                node-controller  Node pause-508007 event: Registered Node pause-508007 in Controller
	  Normal   NodeReady                36s                kubelet          Node pause-508007 status is now: NodeReady
	  Normal   RegisteredNode           17s                node-controller  Node pause-508007 event: Registered Node pause-508007 in Controller
	
	
	==> dmesg <==
	[  +3.991957] overlayfs: idmapped layers are currently not supported
	[ +35.228669] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:15] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:16] overlayfs: idmapped layers are currently not supported
	[  +4.168000] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:17] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:18] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:19] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:24] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:25] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:26] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:27] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:28] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:29] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:30] overlayfs: idmapped layers are currently not supported
	[  +6.342378] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:31] overlayfs: idmapped layers are currently not supported
	[ +25.558454] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:32] overlayfs: idmapped layers are currently not supported
	[ +27.925408] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:33] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:34] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:37] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:38] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:49] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [133d8dbea499991081a35996bff0233170d2b1a868bf3857b507c1583c7a9ec1] <==
	{"level":"warn","ts":"2025-12-06T11:50:19.018438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.055666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.072973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.095158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.122317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.139252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.149068Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.166069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.184596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.208433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.235050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.259707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.302050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.324376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.383514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.409511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.455912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.466585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.490854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.508319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.528231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.548710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.564946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.584811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.688050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53280","server-name":"","error":"EOF"}
	
	
	==> etcd [c1f11f7bb9c73c4b9f1bda5d2219c7bde9c1a3799ace49169aa65d33d5ef49d0] <==
	{"level":"warn","ts":"2025-12-06T11:49:14.178701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:49:14.195641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:49:14.218166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:49:14.245462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:49:14.258858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:49:14.287595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:49:14.337926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39608","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-06T11:50:08.143083Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-06T11:50:08.143144Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-508007","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-12-06T11:50:08.143235Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-06T11:50:08.303085Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-12-06T11:50:08.303241Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-06T11:50:08.303286Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-06T11:50:08.303294Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-12-06T11:50:08.303272Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-06T11:50:08.303346Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-06T11:50:08.303418Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-06T11:50:08.303464Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-06T11:50:08.303363Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-12-06T11:50:08.303548Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-06T11:50:08.303553Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-06T11:50:08.306990Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-12-06T11:50:08.307086Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-06T11:50:08.307121Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-06T11:50:08.307128Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-508007","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 11:50:40 up  3:33,  0 user,  load average: 1.67, 1.61, 1.82
	Linux pause-508007 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [29181bd0fdf81e45e507ff05c995486187c8a5460d27b3f5b331dbc17b03e19d] <==
	I1206 11:50:16.215355       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1206 11:50:16.226903       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1206 11:50:16.227054       1 main.go:148] setting mtu 1500 for CNI 
	I1206 11:50:16.227067       1 main.go:178] kindnetd IP family: "ipv4"
	I1206 11:50:16.227085       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-06T11:50:16Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1206 11:50:16.433993       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1206 11:50:16.434075       1 controller.go:381] "Waiting for informer caches to sync"
	I1206 11:50:16.434123       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1206 11:50:16.447248       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1206 11:50:20.614442       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1206 11:50:20.614515       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"networkpolicies\" in API group \"networking.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1206 11:50:20.614590       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1206 11:50:20.614633       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1206 11:50:21.852091       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1206 11:50:21.852121       1 metrics.go:72] Registering metrics
	I1206 11:50:21.852170       1 controller.go:711] "Syncing nftables rules"
	I1206 11:50:26.434262       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1206 11:50:26.434322       1 main.go:301] handling current node
	I1206 11:50:36.434299       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1206 11:50:36.434335       1 main.go:301] handling current node
	
	
	==> kindnet [4e43dc49fbc59be905132b7da10ac28f16fc2c8f552bd7b9c8f94927db5ca288] <==
	I1206 11:49:23.514429       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1206 11:49:23.515407       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1206 11:49:23.515586       1 main.go:148] setting mtu 1500 for CNI 
	I1206 11:49:23.515626       1 main.go:178] kindnetd IP family: "ipv4"
	I1206 11:49:23.515661       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-06T11:49:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1206 11:49:23.714820       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1206 11:49:23.715571       1 controller.go:381] "Waiting for informer caches to sync"
	I1206 11:49:23.715636       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1206 11:49:23.715766       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1206 11:49:53.714884       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1206 11:49:53.715864       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1206 11:49:53.715874       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1206 11:49:53.715959       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1206 11:49:55.216092       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1206 11:49:55.216128       1 metrics.go:72] Registering metrics
	I1206 11:49:55.216194       1 controller.go:711] "Syncing nftables rules"
	I1206 11:50:03.720965       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1206 11:50:03.721098       1 main.go:301] handling current node
	
	
	==> kube-apiserver [102a55336c508b8d8bd4e02e9e13195ac8d952a43ccd1ad108693e4e8e3ddd88] <==
	W1206 11:50:08.169163       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.170783       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.170841       1 logging.go:55] [core] [Channel #12 SubChannel #14]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.170892       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.170935       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.170984       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.171028       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.171069       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.171112       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.171152       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.171203       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.171272       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.171488       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.171580       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.171648       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.172272       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.172366       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.172540       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.172755       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.172988       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.173034       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.173077       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.173119       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.173164       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.173201       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [ad2550eb9af565122164d80705b913cbc9bf4229a2e36150fead0da5caa0dbcd] <==
	I1206 11:50:20.650598       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1206 11:50:20.671349       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1206 11:50:20.677570       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1206 11:50:20.678376       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1206 11:50:20.715841       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1206 11:50:20.715880       1 policy_source.go:240] refreshing policies
	I1206 11:50:20.750772       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1206 11:50:20.750852       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1206 11:50:20.754500       1 cache.go:39] Caches are synced for autoregister controller
	I1206 11:50:20.765773       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 11:50:20.768751       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1206 11:50:20.769327       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1206 11:50:20.770258       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1206 11:50:20.776158       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1206 11:50:20.776214       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1206 11:50:20.776512       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1206 11:50:20.795946       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 11:50:20.814072       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1206 11:50:20.824362       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1206 11:50:21.465052       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1206 11:50:22.377015       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 11:50:23.834163       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1206 11:50:24.032750       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 11:50:24.082621       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 11:50:24.136184       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [2390e459c24305efc554a27a13a06262c95efaabcf624ba8fde250b8bad71a9a] <==
	I1206 11:50:23.742585       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1206 11:50:23.750879       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1206 11:50:23.750984       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1206 11:50:23.754181       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1206 11:50:23.754339       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1206 11:50:23.756981       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1206 11:50:23.757110       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1206 11:50:23.764965       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1206 11:50:23.765106       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 11:50:23.765143       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1206 11:50:23.765173       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1206 11:50:23.769515       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 11:50:23.769618       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1206 11:50:23.777383       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1206 11:50:23.779475       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1206 11:50:23.779495       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1206 11:50:23.779520       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1206 11:50:23.783990       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1206 11:50:23.786886       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 11:50:23.793665       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1206 11:50:23.793800       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1206 11:50:23.793847       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1206 11:50:23.793874       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1206 11:50:23.793901       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1206 11:50:23.801200       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	
	
	==> kube-controller-manager [9947888e935f9906b6a8f63bf826008ea52d75f58d3993f4ddbe6b0e8d1b28bb] <==
	I1206 11:49:22.053512       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1206 11:49:22.057902       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 11:49:22.067287       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1206 11:49:22.073070       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1206 11:49:22.074324       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1206 11:49:22.075585       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1206 11:49:22.075607       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1206 11:49:22.076131       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1206 11:49:22.077054       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1206 11:49:22.077060       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1206 11:49:22.077379       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1206 11:49:22.077473       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1206 11:49:22.077515       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1206 11:49:22.077678       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 11:49:22.077706       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1206 11:49:22.077715       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1206 11:49:22.077858       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1206 11:49:22.082604       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1206 11:49:22.082769       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1206 11:49:22.082893       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1206 11:49:22.082930       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1206 11:49:22.082961       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1206 11:49:22.095289       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-508007" podCIDRs=["10.244.0.0/24"]
	I1206 11:49:22.102748       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 11:50:07.034969       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [aaae6a7e273cd079a387ceb0a651520860046fb1f5d6f1b68400e13685bcb58e] <==
	I1206 11:50:17.139224       1 server_linux.go:53] "Using iptables proxy"
	I1206 11:50:19.208452       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1206 11:50:20.795764       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"pause-508007\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1206 11:50:22.309495       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 11:50:22.309547       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1206 11:50:22.309626       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 11:50:22.350104       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 11:50:22.350219       1 server_linux.go:132] "Using iptables Proxier"
	I1206 11:50:22.355261       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 11:50:22.355594       1 server.go:527] "Version info" version="v1.34.2"
	I1206 11:50:22.355781       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 11:50:22.357218       1 config.go:200] "Starting service config controller"
	I1206 11:50:22.357286       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 11:50:22.357329       1 config.go:106] "Starting endpoint slice config controller"
	I1206 11:50:22.357369       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 11:50:22.357414       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 11:50:22.357442       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 11:50:22.358094       1 config.go:309] "Starting node config controller"
	I1206 11:50:22.358144       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 11:50:22.358174       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 11:50:22.457719       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 11:50:22.457813       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 11:50:22.457840       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [e2d729fb666f2c3372fe7e8b35b9708de236469e7c191e9607f7590ca46e22a1] <==
	I1206 11:49:23.441920       1 server_linux.go:53] "Using iptables proxy"
	I1206 11:49:23.529125       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 11:49:23.629966       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 11:49:23.630006       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1206 11:49:23.630090       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 11:49:23.648605       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 11:49:23.648665       1 server_linux.go:132] "Using iptables Proxier"
	I1206 11:49:23.652513       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 11:49:23.653287       1 server.go:527] "Version info" version="v1.34.2"
	I1206 11:49:23.653335       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 11:49:23.656355       1 config.go:200] "Starting service config controller"
	I1206 11:49:23.656479       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 11:49:23.656527       1 config.go:106] "Starting endpoint slice config controller"
	I1206 11:49:23.656555       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 11:49:23.656593       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 11:49:23.656623       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 11:49:23.658609       1 config.go:309] "Starting node config controller"
	I1206 11:49:23.658694       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 11:49:23.658726       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 11:49:23.756878       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 11:49:23.756966       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 11:49:23.756846       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [32544fef39c6ea0b6046c208ce2e51a0df4f3b80aa280b7c76f9760b3e50af4d] <==
	I1206 11:50:20.656127       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 11:50:20.658533       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1206 11:50:20.670070       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1206 11:50:20.670148       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 11:50:20.670394       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1206 11:50:20.692578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1206 11:50:20.707716       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 11:50:20.707866       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 11:50:20.707968       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 11:50:20.708065       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 11:50:20.708184       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1206 11:50:20.708259       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 11:50:20.708339       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1206 11:50:20.708423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1206 11:50:20.708541       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 11:50:20.708649       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 11:50:20.708696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1206 11:50:20.708741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 11:50:20.708800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 11:50:20.708835       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 11:50:20.708875       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 11:50:20.711746       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 11:50:20.711881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1206 11:50:20.711928       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1206 11:50:21.671357       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [80b9d15d2b9dbc9bd53e824fe457e3a98241cab7b0c1a75bc72b601c225e0a3e] <==
	E1206 11:49:15.117953       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 11:49:15.118013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 11:49:15.118533       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1206 11:49:15.118600       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1206 11:49:15.120088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 11:49:15.969553       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1206 11:49:16.013662       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 11:49:16.118001       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 11:49:16.141596       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 11:49:16.155303       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 11:49:16.191125       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1206 11:49:16.208551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 11:49:16.343239       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 11:49:16.363687       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1206 11:49:16.370321       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 11:49:16.398612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1206 11:49:16.417477       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 11:49:16.433413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1206 11:49:18.602126       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 11:50:08.144415       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1206 11:50:08.144453       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1206 11:50:08.144479       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1206 11:50:08.144509       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 11:50:08.144777       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1206 11:50:08.144808       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 06 11:50:15 pause-508007 kubelet[1331]: E1206 11:50:15.941149    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-9zw56\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="407010ad-d437-4e90-bbdc-f9eeb5479739" pod="kube-system/kindnet-9zw56"
	Dec 06 11:50:15 pause-508007 kubelet[1331]: I1206 11:50:15.958669    1331 scope.go:117] "RemoveContainer" containerID="c1f11f7bb9c73c4b9f1bda5d2219c7bde9c1a3799ace49169aa65d33d5ef49d0"
	Dec 06 11:50:15 pause-508007 kubelet[1331]: E1206 11:50:15.959512    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-9zw56\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="407010ad-d437-4e90-bbdc-f9eeb5479739" pod="kube-system/kindnet-9zw56"
	Dec 06 11:50:15 pause-508007 kubelet[1331]: E1206 11:50:15.959821    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-94krv\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="cb647f7b-cb31-4d99-9254-82bfdce366fc" pod="kube-system/coredns-66bc5c9577-94krv"
	Dec 06 11:50:15 pause-508007 kubelet[1331]: E1206 11:50:15.960020    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-508007\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="3529c3ad39a1bd2ef07faa509a1790cc" pod="kube-system/kube-scheduler-pause-508007"
	Dec 06 11:50:15 pause-508007 kubelet[1331]: E1206 11:50:15.960441    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-508007\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="3f952fbcb2ee2eb41910ebc976cbed46" pod="kube-system/etcd-pause-508007"
	Dec 06 11:50:15 pause-508007 kubelet[1331]: E1206 11:50:15.960734    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-508007\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="81b9e1c9a2f1233c6200feb8279d93d7" pod="kube-system/kube-apiserver-pause-508007"
	Dec 06 11:50:15 pause-508007 kubelet[1331]: E1206 11:50:15.961057    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-508007\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="9ae96daea84f9f9533e7623250412599" pod="kube-system/kube-controller-manager-pause-508007"
	Dec 06 11:50:15 pause-508007 kubelet[1331]: E1206 11:50:15.961417    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dn8b7\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="9b63dc69-331f-47e6-b0bb-f21401e05ff6" pod="kube-system/kube-proxy-dn8b7"
	Dec 06 11:50:20 pause-508007 kubelet[1331]: E1206 11:50:20.580170    1331 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-508007\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-508007' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Dec 06 11:50:20 pause-508007 kubelet[1331]: E1206 11:50:20.580317    1331 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-dn8b7\" is forbidden: User \"system:node:pause-508007\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-508007' and this object" podUID="9b63dc69-331f-47e6-b0bb-f21401e05ff6" pod="kube-system/kube-proxy-dn8b7"
	Dec 06 11:50:20 pause-508007 kubelet[1331]: E1206 11:50:20.581119    1331 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-508007\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-508007' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Dec 06 11:50:20 pause-508007 kubelet[1331]: E1206 11:50:20.581165    1331 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-508007\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-508007' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Dec 06 11:50:20 pause-508007 kubelet[1331]: E1206 11:50:20.602088    1331 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-9zw56\" is forbidden: User \"system:node:pause-508007\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-508007' and this object" podUID="407010ad-d437-4e90-bbdc-f9eeb5479739" pod="kube-system/kindnet-9zw56"
	Dec 06 11:50:20 pause-508007 kubelet[1331]: E1206 11:50:20.630545    1331 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-94krv\" is forbidden: User \"system:node:pause-508007\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-508007' and this object" podUID="cb647f7b-cb31-4d99-9254-82bfdce366fc" pod="kube-system/coredns-66bc5c9577-94krv"
	Dec 06 11:50:20 pause-508007 kubelet[1331]: E1206 11:50:20.651972    1331 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-508007\" is forbidden: User \"system:node:pause-508007\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-508007' and this object" podUID="3529c3ad39a1bd2ef07faa509a1790cc" pod="kube-system/kube-scheduler-pause-508007"
	Dec 06 11:50:20 pause-508007 kubelet[1331]: E1206 11:50:20.684594    1331 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-508007\" is forbidden: User \"system:node:pause-508007\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-508007' and this object" podUID="3f952fbcb2ee2eb41910ebc976cbed46" pod="kube-system/etcd-pause-508007"
	Dec 06 11:50:20 pause-508007 kubelet[1331]: E1206 11:50:20.706730    1331 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-508007\" is forbidden: User \"system:node:pause-508007\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-508007' and this object" podUID="81b9e1c9a2f1233c6200feb8279d93d7" pod="kube-system/kube-apiserver-pause-508007"
	Dec 06 11:50:20 pause-508007 kubelet[1331]: E1206 11:50:20.714750    1331 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-508007\" is forbidden: User \"system:node:pause-508007\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-508007' and this object" podUID="9ae96daea84f9f9533e7623250412599" pod="kube-system/kube-controller-manager-pause-508007"
	Dec 06 11:50:20 pause-508007 kubelet[1331]: E1206 11:50:20.727618    1331 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-508007\" is forbidden: User \"system:node:pause-508007\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-508007' and this object" podUID="9ae96daea84f9f9533e7623250412599" pod="kube-system/kube-controller-manager-pause-508007"
	Dec 06 11:50:20 pause-508007 kubelet[1331]: E1206 11:50:20.735697    1331 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-dn8b7\" is forbidden: User \"system:node:pause-508007\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-508007' and this object" podUID="9b63dc69-331f-47e6-b0bb-f21401e05ff6" pod="kube-system/kube-proxy-dn8b7"
	Dec 06 11:50:20 pause-508007 kubelet[1331]: E1206 11:50:20.779041    1331 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-9zw56\" is forbidden: User \"system:node:pause-508007\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-508007' and this object" podUID="407010ad-d437-4e90-bbdc-f9eeb5479739" pod="kube-system/kindnet-9zw56"
	Dec 06 11:50:37 pause-508007 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 06 11:50:37 pause-508007 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 06 11:50:37 pause-508007 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-508007 -n pause-508007
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-508007 -n pause-508007: exit status 2 (369.416989ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
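Note: the non-zero exit above is consistent with a paused cluster; minikube's status command reports failures via bitmask exit codes (an assumption taken from minikube's status documentation, not stated in this log), where flag 2 marks a stopped or paused cluster component. A minimal by-hand reproduction of the same probe, reusing the profile name from this run:

	# Query only the APIServer field of the status struct, as the harness does
	out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-508007 -n pause-508007
	echo $?   # 2 observed in this run; 0 would mean every component reports Running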
helpers_test.go:269: (dbg) Run:  kubectl --context pause-508007 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
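The field selector above restricts the listing to pods whose status.phase is anything other than Running, so an empty jsonpath result means no pod is stuck. A sketch of the same query in human-readable form (same context and selector; the tabular variant is illustrative, not part of the harness):

	# Empty output here means all pods across all namespaces report phase=Running
	kubectl --context pause-508007 get po -A --field-selector=status.phase!=Running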
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-508007
helpers_test.go:243: (dbg) docker inspect pause-508007:

-- stdout --
	[
	    {
	        "Id": "68ccb790666283eeb7175ee670d86f2a9bccd05a278e10c5b4be00fc58d84841",
	        "Created": "2025-12-06T11:48:51.430931899Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 573655,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-06T11:48:51.501402052Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:59cc51c6b356ccf2b0650e2edb6cad33b8da9ccfea870136f5f615109d6c846d",
	        "ResolvConfPath": "/var/lib/docker/containers/68ccb790666283eeb7175ee670d86f2a9bccd05a278e10c5b4be00fc58d84841/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/68ccb790666283eeb7175ee670d86f2a9bccd05a278e10c5b4be00fc58d84841/hostname",
	        "HostsPath": "/var/lib/docker/containers/68ccb790666283eeb7175ee670d86f2a9bccd05a278e10c5b4be00fc58d84841/hosts",
	        "LogPath": "/var/lib/docker/containers/68ccb790666283eeb7175ee670d86f2a9bccd05a278e10c5b4be00fc58d84841/68ccb790666283eeb7175ee670d86f2a9bccd05a278e10c5b4be00fc58d84841-json.log",
	        "Name": "/pause-508007",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-508007:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-508007",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "68ccb790666283eeb7175ee670d86f2a9bccd05a278e10c5b4be00fc58d84841",
	                "LowerDir": "/var/lib/docker/overlay2/04a5989ee425a972c21d0faf2404a913ad2f2bc60138f0d2137a201ad5d78598-init/diff:/var/lib/docker/overlay2/5011226d55616c9977b14c1fe617d1302fe59373df05ce8ec6e21b79143a1c57/diff",
	                "MergedDir": "/var/lib/docker/overlay2/04a5989ee425a972c21d0faf2404a913ad2f2bc60138f0d2137a201ad5d78598/merged",
	                "UpperDir": "/var/lib/docker/overlay2/04a5989ee425a972c21d0faf2404a913ad2f2bc60138f0d2137a201ad5d78598/diff",
	                "WorkDir": "/var/lib/docker/overlay2/04a5989ee425a972c21d0faf2404a913ad2f2bc60138f0d2137a201ad5d78598/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-508007",
	                "Source": "/var/lib/docker/volumes/pause-508007/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-508007",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-508007",
	                "name.minikube.sigs.k8s.io": "pause-508007",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ad8365941afaa5567caa29a2aa067b91e6586f3ac80adb18c90f1341473c7ea1",
	            "SandboxKey": "/var/run/docker/netns/ad8365941afa",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33403"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33404"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33407"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33405"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33406"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-508007": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e2:c6:43:3a:f1:12",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1cec166125ff16e07c4d7c1c69b082d93940ee8717545c2dbc2c08629c1de252",
	                    "EndpointID": "505d58bec1f8a1592eda24778b740c9fe01ff2ad5dad974f8736d2793ffe4573",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-508007",
	                        "68ccb7906662"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
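
The inspect output above is where the mapped host ports for this container live; both the harness and minikube's own provisioning (see the log below) read the host port that Docker bound to the container's SSH port, 22/tcp. Below is a minimal Go sketch of that lookup, assuming the `docker inspect pause-508007` JSON above is piped in on stdin; the struct models only the fields it needs and is an illustration, not minikube's actual code.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// inspect models only the fields this sketch needs from `docker inspect`.
type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	// `docker inspect` emits a JSON array with one element per container.
	var containers []inspect
	if err := json.NewDecoder(os.Stdin).Decode(&containers); err != nil || len(containers) == 0 {
		fmt.Fprintln(os.Stderr, "no inspect data:", err)
		os.Exit(1)
	}
	// 22/tcp is the container's SSH port; print each host binding for it.
	for _, b := range containers[0].NetworkSettings.Ports["22/tcp"] {
		fmt.Printf("ssh reachable at %s:%s\n", b.HostIp, b.HostPort) // 127.0.0.1:33403 above
	}
}

minikube performs the equivalent lookup in the provisioning log below with a Go template: docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-508007, which for this container resolves to 33403.
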
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-508007 -n pause-508007
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-508007 -n pause-508007: exit status 2 (358.897145ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
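
As the "(may be ok)" note above indicates, a non-zero exit from `minikube status` is not by itself a harness failure: here the host still reports Running on stdout while the command exits 2. A hypothetical Go sketch (not the helpers_test.go source) of distinguishing "command ran but exited non-zero" from "command could not be run at all", with the binary path and profile name taken from the log above:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64",
		"status", "--format={{.Host}}", "-p", "pause-508007")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	switch {
	case errors.As(err, &exitErr):
		// The command ran; a non-zero exit code may still be acceptable here.
		fmt.Printf("status exited %d (may be ok): %s", exitErr.ExitCode(), out)
	case err != nil:
		// The binary could not be started at all; that is a real failure.
		panic(err)
	default:
		fmt.Printf("status: %s", out)
	}
}

This mirrors what the harness does above: it logs the status error, treats it as possibly benign, and proceeds to collect the post-mortem logs below.
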
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-508007 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-508007 logs -n 25: (1.433420419s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-365903 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                           │ NoKubernetes-365903       │ jenkins │ v1.37.0 │ 06 Dec 25 11:36 UTC │ 06 Dec 25 11:37 UTC │
	│ start   │ -p missing-upgrade-887720 --memory=3072 --driver=docker  --container-runtime=crio                                                               │ missing-upgrade-887720    │ jenkins │ v1.35.0 │ 06 Dec 25 11:36 UTC │ 06 Dec 25 11:37 UTC │
	│ start   │ -p NoKubernetes-365903 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-365903       │ jenkins │ v1.37.0 │ 06 Dec 25 11:37 UTC │ 06 Dec 25 11:37 UTC │
	│ start   │ -p missing-upgrade-887720 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ missing-upgrade-887720    │ jenkins │ v1.37.0 │ 06 Dec 25 11:37 UTC │ 06 Dec 25 11:38 UTC │
	│ delete  │ -p NoKubernetes-365903                                                                                                                          │ NoKubernetes-365903       │ jenkins │ v1.37.0 │ 06 Dec 25 11:37 UTC │ 06 Dec 25 11:37 UTC │
	│ start   │ -p NoKubernetes-365903 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-365903       │ jenkins │ v1.37.0 │ 06 Dec 25 11:37 UTC │ 06 Dec 25 11:37 UTC │
	│ ssh     │ -p NoKubernetes-365903 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-365903       │ jenkins │ v1.37.0 │ 06 Dec 25 11:37 UTC │                     │
	│ stop    │ -p NoKubernetes-365903                                                                                                                          │ NoKubernetes-365903       │ jenkins │ v1.37.0 │ 06 Dec 25 11:37 UTC │ 06 Dec 25 11:37 UTC │
	│ start   │ -p NoKubernetes-365903 --driver=docker  --container-runtime=crio                                                                                │ NoKubernetes-365903       │ jenkins │ v1.37.0 │ 06 Dec 25 11:37 UTC │ 06 Dec 25 11:38 UTC │
	│ ssh     │ -p NoKubernetes-365903 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-365903       │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │                     │
	│ delete  │ -p NoKubernetes-365903                                                                                                                          │ NoKubernetes-365903       │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ start   │ -p kubernetes-upgrade-432995 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio        │ kubernetes-upgrade-432995 │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ delete  │ -p missing-upgrade-887720                                                                                                                       │ missing-upgrade-887720    │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ start   │ -p stopped-upgrade-130351 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ stopped-upgrade-130351    │ jenkins │ v1.35.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:39 UTC │
	│ stop    │ -p kubernetes-upgrade-432995                                                                                                                    │ kubernetes-upgrade-432995 │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │ 06 Dec 25 11:38 UTC │
	│ start   │ -p kubernetes-upgrade-432995 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-432995 │ jenkins │ v1.37.0 │ 06 Dec 25 11:38 UTC │                     │
	│ stop    │ stopped-upgrade-130351 stop                                                                                                                     │ stopped-upgrade-130351    │ jenkins │ v1.35.0 │ 06 Dec 25 11:39 UTC │ 06 Dec 25 11:39 UTC │
	│ start   │ -p stopped-upgrade-130351 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ stopped-upgrade-130351    │ jenkins │ v1.37.0 │ 06 Dec 25 11:39 UTC │ 06 Dec 25 11:43 UTC │
	│ delete  │ -p stopped-upgrade-130351                                                                                                                       │ stopped-upgrade-130351    │ jenkins │ v1.37.0 │ 06 Dec 25 11:43 UTC │ 06 Dec 25 11:43 UTC │
	│ start   │ -p running-upgrade-141321 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ running-upgrade-141321    │ jenkins │ v1.35.0 │ 06 Dec 25 11:43 UTC │ 06 Dec 25 11:44 UTC │
	│ start   │ -p running-upgrade-141321 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ running-upgrade-141321    │ jenkins │ v1.37.0 │ 06 Dec 25 11:44 UTC │ 06 Dec 25 11:48 UTC │
	│ delete  │ -p running-upgrade-141321                                                                                                                       │ running-upgrade-141321    │ jenkins │ v1.37.0 │ 06 Dec 25 11:48 UTC │ 06 Dec 25 11:48 UTC │
	│ start   │ -p pause-508007 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                       │ pause-508007              │ jenkins │ v1.37.0 │ 06 Dec 25 11:48 UTC │ 06 Dec 25 11:50 UTC │
	│ start   │ -p pause-508007 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                │ pause-508007              │ jenkins │ v1.37.0 │ 06 Dec 25 11:50 UTC │ 06 Dec 25 11:50 UTC │
	│ pause   │ -p pause-508007 --alsologtostderr -v=5                                                                                                          │ pause-508007              │ jenkins │ v1.37.0 │ 06 Dec 25 11:50 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 11:50:06
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 11:50:06.712534  576251 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:50:06.712800  576251 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:50:06.712837  576251 out.go:374] Setting ErrFile to fd 2...
	I1206 11:50:06.712862  576251 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:50:06.713254  576251 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 11:50:06.713739  576251 out.go:368] Setting JSON to false
	I1206 11:50:06.714786  576251 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":12758,"bootTime":1765009049,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 11:50:06.715080  576251 start.go:143] virtualization:  
	I1206 11:50:06.718200  576251 out.go:179] * [pause-508007] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 11:50:06.722124  576251 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 11:50:06.722410  576251 notify.go:221] Checking for updates...
	I1206 11:50:06.728357  576251 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 11:50:06.731121  576251 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 11:50:06.734062  576251 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-362985/.minikube
	I1206 11:50:06.736968  576251 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 11:50:06.739960  576251 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 11:50:06.743634  576251 config.go:182] Loaded profile config "pause-508007": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 11:50:06.744476  576251 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 11:50:06.778676  576251 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 11:50:06.778797  576251 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:50:06.846741  576251 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-06 11:50:06.837126033 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:50:06.846932  576251 docker.go:319] overlay module found
	I1206 11:50:06.850144  576251 out.go:179] * Using the docker driver based on existing profile
	I1206 11:50:06.853179  576251 start.go:309] selected driver: docker
	I1206 11:50:06.853203  576251 start.go:927] validating driver "docker" against &{Name:pause-508007 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-508007 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:50:06.853340  576251 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 11:50:06.853442  576251 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:50:06.907623  576251 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-06 11:50:06.897578497 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:50:06.908013  576251 cni.go:84] Creating CNI manager for ""
	I1206 11:50:06.908087  576251 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 11:50:06.908133  576251 start.go:353] cluster config:
	{Name:pause-508007 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-508007 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:50:06.913124  576251 out.go:179] * Starting "pause-508007" primary control-plane node in "pause-508007" cluster
	I1206 11:50:06.915986  576251 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 11:50:06.918915  576251 out.go:179] * Pulling base image v0.0.48-1764843390-22032 ...
	I1206 11:50:06.921760  576251 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 11:50:06.921821  576251 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1206 11:50:06.921836  576251 cache.go:65] Caching tarball of preloaded images
	I1206 11:50:06.921834  576251 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 11:50:06.921977  576251 preload.go:238] Found /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1206 11:50:06.921990  576251 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1206 11:50:06.922169  576251 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/pause-508007/config.json ...
	I1206 11:50:06.942575  576251 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon, skipping pull
	I1206 11:50:06.942598  576251 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 exists in daemon, skipping load
	I1206 11:50:06.942616  576251 cache.go:243] Successfully downloaded all kic artifacts
	I1206 11:50:06.942646  576251 start.go:360] acquireMachinesLock for pause-508007: {Name:mk19bb9c866dfbc476292156df9e57150dcf3d95 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1206 11:50:06.942707  576251 start.go:364] duration metric: took 38.228µs to acquireMachinesLock for "pause-508007"
	I1206 11:50:06.942741  576251 start.go:96] Skipping create...Using existing machine configuration
	I1206 11:50:06.942750  576251 fix.go:54] fixHost starting: 
	I1206 11:50:06.943004  576251 cli_runner.go:164] Run: docker container inspect pause-508007 --format={{.State.Status}}
	I1206 11:50:06.959759  576251 fix.go:112] recreateIfNeeded on pause-508007: state=Running err=<nil>
	W1206 11:50:06.959792  576251 fix.go:138] unexpected machine state, will restart: <nil>
	I1206 11:50:06.962969  576251 out.go:252] * Updating the running docker "pause-508007" container ...
	I1206 11:50:06.963029  576251 machine.go:94] provisionDockerMachine start ...
	I1206 11:50:06.963128  576251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-508007
	I1206 11:50:06.980652  576251 main.go:143] libmachine: Using SSH client type: native
	I1206 11:50:06.981002  576251 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1206 11:50:06.981017  576251 main.go:143] libmachine: About to run SSH command:
	hostname
	I1206 11:50:07.135542  576251 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-508007
	
	I1206 11:50:07.135577  576251 ubuntu.go:182] provisioning hostname "pause-508007"
	I1206 11:50:07.135648  576251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-508007
	I1206 11:50:07.154561  576251 main.go:143] libmachine: Using SSH client type: native
	I1206 11:50:07.154881  576251 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1206 11:50:07.154893  576251 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-508007 && echo "pause-508007" | sudo tee /etc/hostname
	I1206 11:50:07.317121  576251 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-508007
	
	I1206 11:50:07.317279  576251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-508007
	I1206 11:50:07.335276  576251 main.go:143] libmachine: Using SSH client type: native
	I1206 11:50:07.335632  576251 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1206 11:50:07.335648  576251 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-508007' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-508007/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-508007' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1206 11:50:07.491801  576251 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1206 11:50:07.491825  576251 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22047-362985/.minikube CaCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22047-362985/.minikube}
	I1206 11:50:07.491857  576251 ubuntu.go:190] setting up certificates
	I1206 11:50:07.491867  576251 provision.go:84] configureAuth start
	I1206 11:50:07.491930  576251 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-508007
	I1206 11:50:07.509391  576251 provision.go:143] copyHostCerts
	I1206 11:50:07.509482  576251 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem, removing ...
	I1206 11:50:07.509503  576251 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem
	I1206 11:50:07.509578  576251 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/ca.pem (1082 bytes)
	I1206 11:50:07.509680  576251 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem, removing ...
	I1206 11:50:07.509691  576251 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem
	I1206 11:50:07.509719  576251 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/cert.pem (1123 bytes)
	I1206 11:50:07.509776  576251 exec_runner.go:144] found /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem, removing ...
	I1206 11:50:07.509786  576251 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem
	I1206 11:50:07.509810  576251 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22047-362985/.minikube/key.pem (1679 bytes)
	I1206 11:50:07.509863  576251 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem org=jenkins.pause-508007 san=[127.0.0.1 192.168.85.2 localhost minikube pause-508007]
	I1206 11:50:07.751466  576251 provision.go:177] copyRemoteCerts
	I1206 11:50:07.751567  576251 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1206 11:50:07.751625  576251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-508007
	I1206 11:50:07.769824  576251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/pause-508007/id_rsa Username:docker}
	I1206 11:50:07.875869  576251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1206 11:50:07.898578  576251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1206 11:50:07.917803  576251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1206 11:50:07.936775  576251 provision.go:87] duration metric: took 444.873288ms to configureAuth
	I1206 11:50:07.936915  576251 ubuntu.go:206] setting minikube options for container-runtime
	I1206 11:50:07.937241  576251 config.go:182] Loaded profile config "pause-508007": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 11:50:07.937431  576251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-508007
	I1206 11:50:07.955159  576251 main.go:143] libmachine: Using SSH client type: native
	I1206 11:50:07.955582  576251 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1206 11:50:07.955605  576251 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1206 11:50:13.361957  576251 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1206 11:50:13.361980  576251 machine.go:97] duration metric: took 6.398937713s to provisionDockerMachine
	I1206 11:50:13.361997  576251 start.go:293] postStartSetup for "pause-508007" (driver="docker")
	I1206 11:50:13.362044  576251 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1206 11:50:13.362213  576251 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1206 11:50:13.362257  576251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-508007
	I1206 11:50:13.383454  576251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/pause-508007/id_rsa Username:docker}
	I1206 11:50:13.491860  576251 ssh_runner.go:195] Run: cat /etc/os-release
	I1206 11:50:13.495370  576251 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1206 11:50:13.495466  576251 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1206 11:50:13.495486  576251 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-362985/.minikube/addons for local assets ...
	I1206 11:50:13.495549  576251 filesync.go:126] Scanning /home/jenkins/minikube-integration/22047-362985/.minikube/files for local assets ...
	I1206 11:50:13.495656  576251 filesync.go:149] local asset: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem -> 3648552.pem in /etc/ssl/certs
	I1206 11:50:13.495770  576251 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1206 11:50:13.503737  576251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem --> /etc/ssl/certs/3648552.pem (1708 bytes)
	I1206 11:50:13.522677  576251 start.go:296] duration metric: took 160.642182ms for postStartSetup
	I1206 11:50:13.522762  576251 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 11:50:13.522844  576251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-508007
	I1206 11:50:13.541046  576251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/pause-508007/id_rsa Username:docker}
	I1206 11:50:13.645150  576251 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1206 11:50:13.650544  576251 fix.go:56] duration metric: took 6.707785738s for fixHost
	I1206 11:50:13.650573  576251 start.go:83] releasing machines lock for "pause-508007", held for 6.707853143s
	I1206 11:50:13.650645  576251 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-508007
	I1206 11:50:13.668458  576251 ssh_runner.go:195] Run: cat /version.json
	I1206 11:50:13.668524  576251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-508007
	I1206 11:50:13.668584  576251 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1206 11:50:13.668652  576251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-508007
	I1206 11:50:13.698338  576251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/pause-508007/id_rsa Username:docker}
	I1206 11:50:13.699339  576251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/pause-508007/id_rsa Username:docker}
	I1206 11:50:13.803149  576251 ssh_runner.go:195] Run: systemctl --version
	I1206 11:50:13.894900  576251 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1206 11:50:13.934127  576251 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1206 11:50:13.938713  576251 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1206 11:50:13.938804  576251 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1206 11:50:13.946843  576251 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1206 11:50:13.946867  576251 start.go:496] detecting cgroup driver to use...
	I1206 11:50:13.946918  576251 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1206 11:50:13.947000  576251 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1206 11:50:13.962958  576251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1206 11:50:13.979989  576251 docker.go:218] disabling cri-docker service (if available) ...
	I1206 11:50:13.980081  576251 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1206 11:50:13.998986  576251 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1206 11:50:14.016953  576251 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1206 11:50:14.171293  576251 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1206 11:50:14.311817  576251 docker.go:234] disabling docker service ...
	I1206 11:50:14.311888  576251 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1206 11:50:14.327915  576251 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1206 11:50:14.341794  576251 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1206 11:50:14.485429  576251 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1206 11:50:14.626120  576251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1206 11:50:14.639854  576251 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1206 11:50:14.655018  576251 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1206 11:50:14.655155  576251 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:50:14.665068  576251 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1206 11:50:14.665196  576251 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:50:14.674838  576251 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:50:14.684604  576251 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:50:14.694540  576251 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1206 11:50:14.703422  576251 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:50:14.713028  576251 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:50:14.722446  576251 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1206 11:50:14.740342  576251 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1206 11:50:14.748999  576251 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1206 11:50:14.756980  576251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:50:14.892489  576251 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1206 11:50:15.165957  576251 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1206 11:50:15.166040  576251 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1206 11:50:15.170319  576251 start.go:564] Will wait 60s for crictl version
	I1206 11:50:15.170392  576251 ssh_runner.go:195] Run: which crictl
	I1206 11:50:15.174266  576251 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1206 11:50:15.200892  576251 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1206 11:50:15.201054  576251 ssh_runner.go:195] Run: crio --version
	I1206 11:50:15.231830  576251 ssh_runner.go:195] Run: crio --version
	I1206 11:50:15.264266  576251 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1206 11:50:15.267268  576251 cli_runner.go:164] Run: docker network inspect pause-508007 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1206 11:50:15.283519  576251 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1206 11:50:15.287662  576251 kubeadm.go:884] updating cluster {Name:pause-508007 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-508007 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1206 11:50:15.287847  576251 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1206 11:50:15.287912  576251 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 11:50:15.323532  576251 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 11:50:15.323559  576251 crio.go:433] Images already preloaded, skipping extraction
	I1206 11:50:15.323621  576251 ssh_runner.go:195] Run: sudo crictl images --output json
	I1206 11:50:15.349196  576251 crio.go:514] all images are preloaded for cri-o runtime.
	I1206 11:50:15.349218  576251 cache_images.go:86] Images are preloaded, skipping loading
	I1206 11:50:15.349226  576251 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1206 11:50:15.349322  576251 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-508007 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-508007 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1206 11:50:15.349404  576251 ssh_runner.go:195] Run: crio config
	I1206 11:50:15.416911  576251 cni.go:84] Creating CNI manager for ""
	I1206 11:50:15.416932  576251 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 11:50:15.416980  576251 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1206 11:50:15.417011  576251 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-508007 NodeName:pause-508007 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1206 11:50:15.417148  576251 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-508007"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1206 11:50:15.417223  576251 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1206 11:50:15.425213  576251 binaries.go:51] Found k8s binaries, skipping transfer
	I1206 11:50:15.425287  576251 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1206 11:50:15.433145  576251 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1206 11:50:15.446966  576251 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1206 11:50:15.461034  576251 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1206 11:50:15.479238  576251 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1206 11:50:15.485738  576251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:50:15.634489  576251 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 11:50:15.648639  576251 certs.go:69] Setting up /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/pause-508007 for IP: 192.168.85.2
	I1206 11:50:15.648704  576251 certs.go:195] generating shared ca certs ...
	I1206 11:50:15.648744  576251 certs.go:227] acquiring lock for ca certs: {Name:mke2ec61a37b6f3abbcbeb9abd23d6a19d011dd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:50:15.648901  576251 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22047-362985/.minikube/ca.key
	I1206 11:50:15.648961  576251 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.key
	I1206 11:50:15.648972  576251 certs.go:257] generating profile certs ...
	I1206 11:50:15.649057  576251 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/pause-508007/client.key
	I1206 11:50:15.649125  576251 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/pause-508007/apiserver.key.5feff3c6
	I1206 11:50:15.649170  576251 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/pause-508007/proxy-client.key
	I1206 11:50:15.649282  576251 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855.pem (1338 bytes)
	W1206 11:50:15.649318  576251 certs.go:480] ignoring /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855_empty.pem, impossibly tiny 0 bytes
	I1206 11:50:15.649331  576251 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca-key.pem (1675 bytes)
	I1206 11:50:15.649382  576251 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/ca.pem (1082 bytes)
	I1206 11:50:15.649413  576251 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/cert.pem (1123 bytes)
	I1206 11:50:15.649440  576251 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/certs/key.pem (1679 bytes)
	I1206 11:50:15.649492  576251 certs.go:484] found cert: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem (1708 bytes)
	I1206 11:50:15.650160  576251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1206 11:50:15.670192  576251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1206 11:50:15.688645  576251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1206 11:50:15.709146  576251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1206 11:50:15.727282  576251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/pause-508007/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1206 11:50:15.745689  576251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/pause-508007/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1206 11:50:15.764271  576251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/pause-508007/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1206 11:50:15.781847  576251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/pause-508007/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1206 11:50:15.799538  576251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/certs/364855.pem --> /usr/share/ca-certificates/364855.pem (1338 bytes)
	I1206 11:50:15.817024  576251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/ssl/certs/3648552.pem --> /usr/share/ca-certificates/3648552.pem (1708 bytes)
	I1206 11:50:15.834721  576251 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1206 11:50:15.853844  576251 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1206 11:50:15.872104  576251 ssh_runner.go:195] Run: openssl version
	I1206 11:50:15.879119  576251 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364855.pem
	I1206 11:50:15.890187  576251 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364855.pem /etc/ssl/certs/364855.pem
	I1206 11:50:15.899275  576251 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364855.pem
	I1206 11:50:15.906793  576251 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  6 10:36 /usr/share/ca-certificates/364855.pem
	I1206 11:50:15.906926  576251 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364855.pem
	I1206 11:50:15.957973  576251 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1206 11:50:15.967765  576251 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648552.pem
	I1206 11:50:15.986491  576251 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648552.pem /etc/ssl/certs/3648552.pem
	I1206 11:50:16.002237  576251 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648552.pem
	I1206 11:50:16.012077  576251 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  6 10:36 /usr/share/ca-certificates/3648552.pem
	I1206 11:50:16.012216  576251 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648552.pem
	I1206 11:50:16.090131  576251 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1206 11:50:16.116616  576251 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:50:16.135638  576251 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1206 11:50:16.161245  576251 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:50:16.171078  576251 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  6 10:26 /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:50:16.171208  576251 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1206 11:50:16.327506  576251 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
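The block above repeats the same three-step dance for every CA: copy the PEM under /usr/share/ca-certificates, compute its OpenSSL subject hash, and point /etc/ssl/certs/<hash>.0 at it (b5213941.0 is minikubeCA's hash in this run). A minimal Go sketch of that dance, shelling out to the identical openssl invocation; error handling is simplified and the certificate path is just one of the files installed above:

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem" // one of the files copied above
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            log.Fatal(err)
        }
        hash := strings.TrimSpace(string(out)) // "b5213941" for minikubeCA in this run
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        // The log uses `ln -fs`, which overwrites; os.Symlink instead fails
        // if the link already exists, so tolerate that case explicitly.
        if err := os.Symlink(cert, link); err != nil && !os.IsExist(err) {
            log.Fatal(err)
        }
        fmt.Println("CA trusted via", link)
    }
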
	I1206 11:50:16.337536  576251 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1206 11:50:16.342588  576251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1206 11:50:16.400301  576251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1206 11:50:16.447822  576251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1206 11:50:16.502836  576251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1206 11:50:16.550468  576251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1206 11:50:16.601981  576251 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1206 11:50:16.647114  576251 kubeadm.go:401] StartCluster: {Name:pause-508007 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-508007 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:50:16.647241  576251 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1206 11:50:16.647335  576251 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1206 11:50:16.705778  576251 cri.go:89] found id: "133d8dbea499991081a35996bff0233170d2b1a868bf3857b507c1583c7a9ec1"
	I1206 11:50:16.705800  576251 cri.go:89] found id: "32544fef39c6ea0b6046c208ce2e51a0df4f3b80aa280b7c76f9760b3e50af4d"
	I1206 11:50:16.705804  576251 cri.go:89] found id: "2390e459c24305efc554a27a13a06262c95efaabcf624ba8fde250b8bad71a9a"
	I1206 11:50:16.705808  576251 cri.go:89] found id: "3090dcde82b5b2a345d93400577bfcb990c7d3e5129f6ec7e8a432e9ec254f4c"
	I1206 11:50:16.705812  576251 cri.go:89] found id: "aaae6a7e273cd079a387ceb0a651520860046fb1f5d6f1b68400e13685bcb58e"
	I1206 11:50:16.705815  576251 cri.go:89] found id: "29181bd0fdf81e45e507ff05c995486187c8a5460d27b3f5b331dbc17b03e19d"
	I1206 11:50:16.705818  576251 cri.go:89] found id: "ad2550eb9af565122164d80705b913cbc9bf4229a2e36150fead0da5caa0dbcd"
	I1206 11:50:16.705822  576251 cri.go:89] found id: "7d98153e0d77ef5e7104676cb3aa0afe2b6048a911a88464258274f081e58123"
	I1206 11:50:16.705825  576251 cri.go:89] found id: "4e43dc49fbc59be905132b7da10ac28f16fc2c8f552bd7b9c8f94927db5ca288"
	I1206 11:50:16.705857  576251 cri.go:89] found id: "e2d729fb666f2c3372fe7e8b35b9708de236469e7c191e9607f7590ca46e22a1"
	I1206 11:50:16.705867  576251 cri.go:89] found id: "80b9d15d2b9dbc9bd53e824fe457e3a98241cab7b0c1a75bc72b601c225e0a3e"
	I1206 11:50:16.705884  576251 cri.go:89] found id: "9947888e935f9906b6a8f63bf826008ea52d75f58d3993f4ddbe6b0e8d1b28bb"
	I1206 11:50:16.705894  576251 cri.go:89] found id: "102a55336c508b8d8bd4e02e9e13195ac8d952a43ccd1ad108693e4e8e3ddd88"
	I1206 11:50:16.705898  576251 cri.go:89] found id: "c1f11f7bb9c73c4b9f1bda5d2219c7bde9c1a3799ace49169aa65d33d5ef49d0"
	I1206 11:50:16.705902  576251 cri.go:89] found id: ""
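All fourteen IDs above come from the single crictl call shown before them. A minimal sketch of the same enumeration, reusing the exact command from the log; the trailing empty `found id: ""` falls out of splitting the output on its final newline:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        // The exact invocation from the log above: all containers (running or
        // not), IDs only, filtered to the kube-system namespace label.
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            log.Fatal(err)
        }
        for _, id := range strings.Split(string(out), "\n") {
            fmt.Printf("found id: %q\n", id) // final entry is "" (trailing newline)
        }
    }
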
	I1206 11:50:16.705967  576251 ssh_runner.go:195] Run: sudo runc list -f json
	W1206 11:50:16.726641  576251 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T11:50:16Z" level=error msg="open /run/runc: no such file or directory"
	I1206 11:50:16.726784  576251 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1206 11:50:16.744665  576251 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1206 11:50:16.744727  576251 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1206 11:50:16.744804  576251 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1206 11:50:16.757576  576251 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1206 11:50:16.758334  576251 kubeconfig.go:125] found "pause-508007" server: "https://192.168.85.2:8443"
	I1206 11:50:16.759282  576251 kapi.go:59] client config for pause-508007: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22047-362985/.minikube/profiles/pause-508007/client.crt", KeyFile:"/home/jenkins/minikube-integration/22047-362985/.minikube/profiles/pause-508007/client.key", CAFile:"/home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1206 11:50:16.760115  576251 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1206 11:50:16.760176  576251 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1206 11:50:16.760197  576251 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1206 11:50:16.760215  576251 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1206 11:50:16.760245  576251 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1206 11:50:16.760598  576251 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1206 11:50:16.770560  576251 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1206 11:50:16.770643  576251 kubeadm.go:602] duration metric: took 25.887874ms to restartPrimaryControlPlane
	I1206 11:50:16.770667  576251 kubeadm.go:403] duration metric: took 123.564692ms to StartCluster
	I1206 11:50:16.770706  576251 settings.go:142] acquiring lock: {Name:mk789e01bfd4ab9fa1e2a8415fa99b570b26926a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:50:16.770801  576251 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 11:50:16.771776  576251 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/kubeconfig: {Name:mk779651834cfbdc6f0b5e8f5a9abc0f05106181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 11:50:16.772086  576251 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1206 11:50:16.772741  576251 config.go:182] Loaded profile config "pause-508007": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 11:50:16.772704  576251 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1206 11:50:16.775275  576251 out.go:179] * Verifying Kubernetes components...
	I1206 11:50:16.775388  576251 out.go:179] * Enabled addons: 
	I1206 11:50:16.778195  576251 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1206 11:50:16.778327  576251 addons.go:530] duration metric: took 5.633081ms for enable addons: enabled=[]
	I1206 11:50:17.087086  576251 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1206 11:50:17.106560  576251 node_ready.go:35] waiting up to 6m0s for node "pause-508007" to be "Ready" ...
	I1206 11:50:20.615307  576251 node_ready.go:49] node "pause-508007" is "Ready"
	I1206 11:50:20.615338  576251 node_ready.go:38] duration metric: took 3.50873829s for node "pause-508007" to be "Ready" ...
	I1206 11:50:20.615352  576251 api_server.go:52] waiting for apiserver process to appear ...
	I1206 11:50:20.615445  576251 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:50:20.633944  576251 api_server.go:72] duration metric: took 3.861784957s to wait for apiserver process to appear ...
	I1206 11:50:20.633971  576251 api_server.go:88] waiting for apiserver healthz status ...
	I1206 11:50:20.633992  576251 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:50:20.681481  576251 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1206 11:50:20.681511  576251 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1206 11:50:21.134087  576251 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:50:21.144212  576251 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1206 11:50:21.144294  576251 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1206 11:50:21.635030  576251 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:50:21.645107  576251 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1206 11:50:21.645194  576251 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1206 11:50:22.135116  576251 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1206 11:50:22.145456  576251 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1206 11:50:22.146712  576251 api_server.go:141] control plane version: v1.34.2
	I1206 11:50:22.146749  576251 api_server.go:131] duration metric: took 1.512761104s to wait for apiserver health ...
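The sequence above is the normal signature of an apiserver coming back: the first unauthenticated probe is rejected as system:anonymous (403), the next ones land while post-start hooks such as rbac/bootstrap-roles are still settling (500), and the wait ends on the first 200. A minimal sketch of that polling loop, assuming the endpoint from the log and the roughly 500ms retry interval its timestamps show:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // No client certificate is presented, which is why the apiserver
            // sees the probe as system:anonymous and answers 403 at first.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for {
            resp, err := client.Get("https://192.168.85.2:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("healthz:", string(body)) // "ok"
                    return
                }
                // 403 (anonymous) and 500 (post-start hooks still settling)
                // both mean "try again".
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
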
	I1206 11:50:22.146776  576251 system_pods.go:43] waiting for kube-system pods to appear ...
	I1206 11:50:22.150073  576251 system_pods.go:59] 7 kube-system pods found
	I1206 11:50:22.150121  576251 system_pods.go:61] "coredns-66bc5c9577-94krv" [cb647f7b-cb31-4d99-9254-82bfdce366fc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 11:50:22.150130  576251 system_pods.go:61] "etcd-pause-508007" [7488d950-14a7-4590-9fd6-4510bcfe6034] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 11:50:22.150136  576251 system_pods.go:61] "kindnet-9zw56" [407010ad-d437-4e90-bbdc-f9eeb5479739] Running
	I1206 11:50:22.150142  576251 system_pods.go:61] "kube-apiserver-pause-508007" [3d567c62-c73b-4810-b13b-2ee29df3a862] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 11:50:22.150149  576251 system_pods.go:61] "kube-controller-manager-pause-508007" [c8b8d23a-135b-40a9-9fa9-036e8f53f330] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 11:50:22.150158  576251 system_pods.go:61] "kube-proxy-dn8b7" [9b63dc69-331f-47e6-b0bb-f21401e05ff6] Running
	I1206 11:50:22.150164  576251 system_pods.go:61] "kube-scheduler-pause-508007" [2608d0ee-fbb1-4dc8-b227-3d375af1cb7d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 11:50:22.150170  576251 system_pods.go:74] duration metric: took 3.381066ms to wait for pod list to return data ...
	I1206 11:50:22.150187  576251 default_sa.go:34] waiting for default service account to be created ...
	I1206 11:50:22.152933  576251 default_sa.go:45] found service account: "default"
	I1206 11:50:22.152964  576251 default_sa.go:55] duration metric: took 2.77055ms for default service account to be created ...
	I1206 11:50:22.152984  576251 system_pods.go:116] waiting for k8s-apps to be running ...
	I1206 11:50:22.157234  576251 system_pods.go:86] 7 kube-system pods found
	I1206 11:50:22.157271  576251 system_pods.go:89] "coredns-66bc5c9577-94krv" [cb647f7b-cb31-4d99-9254-82bfdce366fc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1206 11:50:22.157281  576251 system_pods.go:89] "etcd-pause-508007" [7488d950-14a7-4590-9fd6-4510bcfe6034] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1206 11:50:22.157287  576251 system_pods.go:89] "kindnet-9zw56" [407010ad-d437-4e90-bbdc-f9eeb5479739] Running
	I1206 11:50:22.157293  576251 system_pods.go:89] "kube-apiserver-pause-508007" [3d567c62-c73b-4810-b13b-2ee29df3a862] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1206 11:50:22.157300  576251 system_pods.go:89] "kube-controller-manager-pause-508007" [c8b8d23a-135b-40a9-9fa9-036e8f53f330] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1206 11:50:22.157305  576251 system_pods.go:89] "kube-proxy-dn8b7" [9b63dc69-331f-47e6-b0bb-f21401e05ff6] Running
	I1206 11:50:22.157317  576251 system_pods.go:89] "kube-scheduler-pause-508007" [2608d0ee-fbb1-4dc8-b227-3d375af1cb7d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1206 11:50:22.157335  576251 system_pods.go:126] duration metric: took 4.343897ms to wait for k8s-apps to be running ...
	I1206 11:50:22.157343  576251 system_svc.go:44] waiting for kubelet service to be running ....
	I1206 11:50:22.157402  576251 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 11:50:22.170301  576251 system_svc.go:56] duration metric: took 12.9479ms WaitForService to wait for kubelet
	I1206 11:50:22.170330  576251 kubeadm.go:587] duration metric: took 5.398175496s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1206 11:50:22.170349  576251 node_conditions.go:102] verifying NodePressure condition ...
	I1206 11:50:22.173328  576251 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1206 11:50:22.173366  576251 node_conditions.go:123] node cpu capacity is 2
	I1206 11:50:22.173379  576251 node_conditions.go:105] duration metric: took 3.025683ms to run NodePressure ...
	I1206 11:50:22.173392  576251 start.go:242] waiting for startup goroutines ...
	I1206 11:50:22.173430  576251 start.go:247] waiting for cluster config update ...
	I1206 11:50:22.173446  576251 start.go:256] writing updated cluster config ...
	I1206 11:50:22.173774  576251 ssh_runner.go:195] Run: rm -f paused
	I1206 11:50:22.177489  576251 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 11:50:22.178251  576251 kapi.go:59] client config for pause-508007: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22047-362985/.minikube/profiles/pause-508007/client.crt", KeyFile:"/home/jenkins/minikube-integration/22047-362985/.minikube/profiles/pause-508007/client.key", CAFile:"/home/jenkins/minikube-integration/22047-362985/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb3520), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
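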
	I1206 11:50:22.181467  576251 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-94krv" in "kube-system" namespace to be "Ready" or be gone ...
	W1206 11:50:24.187279  576251 pod_ready.go:104] pod "coredns-66bc5c9577-94krv" is not "Ready", error: <nil>
	W1206 11:50:26.686726  576251 pod_ready.go:104] pod "coredns-66bc5c9577-94krv" is not "Ready", error: <nil>
	I1206 11:50:28.186934  576251 pod_ready.go:94] pod "coredns-66bc5c9577-94krv" is "Ready"
	I1206 11:50:28.186964  576251 pod_ready.go:86] duration metric: took 6.005469329s for pod "coredns-66bc5c9577-94krv" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:50:28.189587  576251 pod_ready.go:83] waiting for pod "etcd-pause-508007" in "kube-system" namespace to be "Ready" or be gone ...
	W1206 11:50:30.195679  576251 pod_ready.go:104] pod "etcd-pause-508007" is not "Ready", error: <nil>
	W1206 11:50:32.695686  576251 pod_ready.go:104] pod "etcd-pause-508007" is not "Ready", error: <nil>
	W1206 11:50:35.196461  576251 pod_ready.go:104] pod "etcd-pause-508007" is not "Ready", error: <nil>
	I1206 11:50:35.695338  576251 pod_ready.go:94] pod "etcd-pause-508007" is "Ready"
	I1206 11:50:35.695367  576251 pod_ready.go:86] duration metric: took 7.505751324s for pod "etcd-pause-508007" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:50:35.697872  576251 pod_ready.go:83] waiting for pod "kube-apiserver-pause-508007" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:50:35.702433  576251 pod_ready.go:94] pod "kube-apiserver-pause-508007" is "Ready"
	I1206 11:50:35.702464  576251 pod_ready.go:86] duration metric: took 4.565028ms for pod "kube-apiserver-pause-508007" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:50:35.705092  576251 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-508007" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:50:36.211125  576251 pod_ready.go:94] pod "kube-controller-manager-pause-508007" is "Ready"
	I1206 11:50:36.211155  576251 pod_ready.go:86] duration metric: took 506.037784ms for pod "kube-controller-manager-pause-508007" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:50:36.213709  576251 pod_ready.go:83] waiting for pod "kube-proxy-dn8b7" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:50:36.293482  576251 pod_ready.go:94] pod "kube-proxy-dn8b7" is "Ready"
	I1206 11:50:36.293510  576251 pod_ready.go:86] duration metric: took 79.775674ms for pod "kube-proxy-dn8b7" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:50:36.492577  576251 pod_ready.go:83] waiting for pod "kube-scheduler-pause-508007" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:50:36.892888  576251 pod_ready.go:94] pod "kube-scheduler-pause-508007" is "Ready"
	I1206 11:50:36.892919  576251 pod_ready.go:86] duration metric: took 400.311228ms for pod "kube-scheduler-pause-508007" in "kube-system" namespace to be "Ready" or be gone ...
	I1206 11:50:36.892933  576251 pod_ready.go:40] duration metric: took 14.715399961s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1206 11:50:36.946925  576251 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1206 11:50:36.949991  576251 out.go:179] * Done! kubectl is now configured to use "pause-508007" cluster and "default" namespace by default
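Each pod_ready.go wait above reduces to polling the pod's Ready condition until it reports True or the pod is gone. A minimal client-go sketch of one such check, using the coredns selector from the label list; the kubeconfig path is an assumption:

    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed kubeconfig path
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // One of the selectors listed above; coredns carries k8s-app=kube-dns.
        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
            metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
        if err != nil {
            log.Fatal(err)
        }
        for _, p := range pods.Items {
            for _, c := range p.Status.Conditions {
                if c.Type == corev1.PodReady {
                    fmt.Printf("%s Ready=%s\n", p.Name, c.Status)
                }
            }
        }
    }
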
	
	
	==> CRI-O <==
	Dec 06 11:50:16 pause-508007 crio[2088]: time="2025-12-06T11:50:16.179697504Z" level=info msg="Started container" PID=2328 containerID=aaae6a7e273cd079a387ceb0a651520860046fb1f5d6f1b68400e13685bcb58e description=kube-system/kube-proxy-dn8b7/kube-proxy id=f1a63cd2-c150-4d8c-911d-b4b92c3474cb name=/runtime.v1.RuntimeService/StartContainer sandboxID=ce409ee17bf60174c3fc91203b275e307707688270607f0ce6b53d41d507ca8c
	Dec 06 11:50:16 pause-508007 crio[2088]: time="2025-12-06T11:50:16.181365227Z" level=info msg="Created container 133d8dbea499991081a35996bff0233170d2b1a868bf3857b507c1583c7a9ec1: kube-system/etcd-pause-508007/etcd" id=9f666972-7ec8-40d1-8f6d-63080e97739b name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 11:50:16 pause-508007 crio[2088]: time="2025-12-06T11:50:16.182206317Z" level=info msg="Starting container: 133d8dbea499991081a35996bff0233170d2b1a868bf3857b507c1583c7a9ec1" id=7fcecdc4-0dd5-47f8-a22d-76d5192e2503 name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 11:50:16 pause-508007 crio[2088]: time="2025-12-06T11:50:16.184471748Z" level=info msg="Created container 2390e459c24305efc554a27a13a06262c95efaabcf624ba8fde250b8bad71a9a: kube-system/kube-controller-manager-pause-508007/kube-controller-manager" id=13de4451-65c9-4796-b70b-e2956f4d33c8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 11:50:16 pause-508007 crio[2088]: time="2025-12-06T11:50:16.184561865Z" level=info msg="Created container 3090dcde82b5b2a345d93400577bfcb990c7d3e5129f6ec7e8a432e9ec254f4c: kube-system/coredns-66bc5c9577-94krv/coredns" id=b74c89f5-f783-435c-a22a-6447e4f542c2 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 11:50:16 pause-508007 crio[2088]: time="2025-12-06T11:50:16.186839292Z" level=info msg="Created container 32544fef39c6ea0b6046c208ce2e51a0df4f3b80aa280b7c76f9760b3e50af4d: kube-system/kube-scheduler-pause-508007/kube-scheduler" id=178ac305-4972-48e3-969e-fd85660ff3df name=/runtime.v1.RuntimeService/CreateContainer
	Dec 06 11:50:16 pause-508007 crio[2088]: time="2025-12-06T11:50:16.187735685Z" level=info msg="Started container" PID=2358 containerID=133d8dbea499991081a35996bff0233170d2b1a868bf3857b507c1583c7a9ec1 description=kube-system/etcd-pause-508007/etcd id=7fcecdc4-0dd5-47f8-a22d-76d5192e2503 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cd876164c8a62f0985897b1b47210f978c2fbfea1dc80865fd266f061091e046
	Dec 06 11:50:16 pause-508007 crio[2088]: time="2025-12-06T11:50:16.188233781Z" level=info msg="Starting container: 2390e459c24305efc554a27a13a06262c95efaabcf624ba8fde250b8bad71a9a" id=7cc8abf0-49c2-4ef6-86cc-1df924ccefcf name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 11:50:16 pause-508007 crio[2088]: time="2025-12-06T11:50:16.195923881Z" level=info msg="Starting container: 32544fef39c6ea0b6046c208ce2e51a0df4f3b80aa280b7c76f9760b3e50af4d" id=0d541e1e-caf9-4504-91ef-e1d6b4eb2e0a name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 11:50:16 pause-508007 crio[2088]: time="2025-12-06T11:50:16.196122693Z" level=info msg="Starting container: 3090dcde82b5b2a345d93400577bfcb990c7d3e5129f6ec7e8a432e9ec254f4c" id=3a9ce770-7ebc-46ae-b694-f0dc09af184e name=/runtime.v1.RuntimeService/StartContainer
	Dec 06 11:50:16 pause-508007 crio[2088]: time="2025-12-06T11:50:16.207842357Z" level=info msg="Started container" PID=2336 containerID=2390e459c24305efc554a27a13a06262c95efaabcf624ba8fde250b8bad71a9a description=kube-system/kube-controller-manager-pause-508007/kube-controller-manager id=7cc8abf0-49c2-4ef6-86cc-1df924ccefcf name=/runtime.v1.RuntimeService/StartContainer sandboxID=d6051cc0104001c01d60cbb9a07f055a443e4ab35eea18fbf12b59b917330579
	Dec 06 11:50:16 pause-508007 crio[2088]: time="2025-12-06T11:50:16.219925117Z" level=info msg="Started container" PID=2346 containerID=32544fef39c6ea0b6046c208ce2e51a0df4f3b80aa280b7c76f9760b3e50af4d description=kube-system/kube-scheduler-pause-508007/kube-scheduler id=0d541e1e-caf9-4504-91ef-e1d6b4eb2e0a name=/runtime.v1.RuntimeService/StartContainer sandboxID=83f52db18377aabbed472174907a8d372e8a2789ac5d5ebf0ba89973476a595e
	Dec 06 11:50:16 pause-508007 crio[2088]: time="2025-12-06T11:50:16.276257974Z" level=info msg="Started container" PID=2353 containerID=3090dcde82b5b2a345d93400577bfcb990c7d3e5129f6ec7e8a432e9ec254f4c description=kube-system/coredns-66bc5c9577-94krv/coredns id=3a9ce770-7ebc-46ae-b694-f0dc09af184e name=/runtime.v1.RuntimeService/StartContainer sandboxID=c17df71f20be6669ed130ce3e7b63e4e847d4e1429afec74caf2618fe5be4626
	Dec 06 11:50:26 pause-508007 crio[2088]: time="2025-12-06T11:50:26.434723293Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 06 11:50:26 pause-508007 crio[2088]: time="2025-12-06T11:50:26.439110842Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 06 11:50:26 pause-508007 crio[2088]: time="2025-12-06T11:50:26.439147814Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 06 11:50:26 pause-508007 crio[2088]: time="2025-12-06T11:50:26.439176352Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 06 11:50:26 pause-508007 crio[2088]: time="2025-12-06T11:50:26.442980519Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 06 11:50:26 pause-508007 crio[2088]: time="2025-12-06T11:50:26.443031777Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 06 11:50:26 pause-508007 crio[2088]: time="2025-12-06T11:50:26.443055384Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 06 11:50:26 pause-508007 crio[2088]: time="2025-12-06T11:50:26.446211956Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 06 11:50:26 pause-508007 crio[2088]: time="2025-12-06T11:50:26.446246352Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 06 11:50:26 pause-508007 crio[2088]: time="2025-12-06T11:50:26.446270533Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 06 11:50:26 pause-508007 crio[2088]: time="2025-12-06T11:50:26.449434916Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 06 11:50:26 pause-508007 crio[2088]: time="2025-12-06T11:50:26.449471266Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	133d8dbea4999       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   26 seconds ago       Running             etcd                      1                   cd876164c8a62       etcd-pause-508007                      kube-system
	32544fef39c6e       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   26 seconds ago       Running             kube-scheduler            1                   83f52db18377a       kube-scheduler-pause-508007            kube-system
	2390e459c2430       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   26 seconds ago       Running             kube-controller-manager   1                   d6051cc010400       kube-controller-manager-pause-508007   kube-system
	3090dcde82b5b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   26 seconds ago       Running             coredns                   1                   c17df71f20be6       coredns-66bc5c9577-94krv               kube-system
	aaae6a7e273cd       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   26 seconds ago       Running             kube-proxy                1                   ce409ee17bf60       kube-proxy-dn8b7                       kube-system
	29181bd0fdf81       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   26 seconds ago       Running             kindnet-cni               1                   24bfa60713736       kindnet-9zw56                          kube-system
	ad2550eb9af56       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   26 seconds ago       Running             kube-apiserver            1                   4a6975d5bdf80       kube-apiserver-pause-508007            kube-system
	7d98153e0d77e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   37 seconds ago       Exited              coredns                   0                   c17df71f20be6       coredns-66bc5c9577-94krv               kube-system
	4e43dc49fbc59       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   24bfa60713736       kindnet-9zw56                          kube-system
	e2d729fb666f2       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   About a minute ago   Exited              kube-proxy                0                   ce409ee17bf60       kube-proxy-dn8b7                       kube-system
	80b9d15d2b9db       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   About a minute ago   Exited              kube-scheduler            0                   83f52db18377a       kube-scheduler-pause-508007            kube-system
	9947888e935f9       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   About a minute ago   Exited              kube-controller-manager   0                   d6051cc010400       kube-controller-manager-pause-508007   kube-system
	102a55336c508       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   About a minute ago   Exited              kube-apiserver            0                   4a6975d5bdf80       kube-apiserver-pause-508007            kube-system
	c1f11f7bb9c73       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   About a minute ago   Exited              etcd                      0                   cd876164c8a62       etcd-pause-508007                      kube-system
	
	
	==> coredns [3090dcde82b5b2a345d93400577bfcb990c7d3e5129f6ec7e8a432e9ec254f4c] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58000 - 24814 "HINFO IN 2518647008278542066.4443455033724922931. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.0143583s
	
	
	==> coredns [7d98153e0d77ef5e7104676cb3aa0afe2b6048a911a88464258274f081e58123] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34852 - 22102 "HINFO IN 4574325125218278221.3142240415227874887. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.028101077s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-508007
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-508007
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a71f4ee951e001b59a7bfc83202c901c27a5d9b4
	                    minikube.k8s.io/name=pause-508007
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_06T11_49_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 06 Dec 2025 11:49:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-508007
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 06 Dec 2025 11:50:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 06 Dec 2025 11:50:04 +0000   Sat, 06 Dec 2025 11:49:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 06 Dec 2025 11:50:04 +0000   Sat, 06 Dec 2025 11:49:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 06 Dec 2025 11:50:04 +0000   Sat, 06 Dec 2025 11:49:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 06 Dec 2025 11:50:04 +0000   Sat, 06 Dec 2025 11:50:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-508007
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 276ce0203b90767726fe164c6931608e
	  System UUID:                9470a48f-3d2c-45e6-a671-15b0e69330d8
	  Boot ID:                    b73b980d-8d6b-40e0-82fa-5c1b47c1eef7
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-94krv                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     79s
	  kube-system                 etcd-pause-508007                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         85s
	  kube-system                 kindnet-9zw56                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      80s
	  kube-system                 kube-apiserver-pause-508007             250m (12%)    0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 kube-controller-manager-pause-508007    200m (10%)    0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-proxy-dn8b7                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-scheduler-pause-508007             100m (5%)     0 (0%)      0 (0%)           0 (0%)         86s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 79s                kube-proxy       
	  Normal   Starting                 20s                kube-proxy       
	  Warning  CgroupV1                 92s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  91s (x8 over 91s)  kubelet          Node pause-508007 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    91s (x8 over 91s)  kubelet          Node pause-508007 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     91s (x8 over 91s)  kubelet          Node pause-508007 status is now: NodeHasSufficientPID
	  Normal   Starting                 85s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 85s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  85s                kubelet          Node pause-508007 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    85s                kubelet          Node pause-508007 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     85s                kubelet          Node pause-508007 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           80s                node-controller  Node pause-508007 event: Registered Node pause-508007 in Controller
	  Normal   NodeReady                38s                kubelet          Node pause-508007 status is now: NodeReady
	  Normal   RegisteredNode           19s                node-controller  Node pause-508007 event: Registered Node pause-508007 in Controller
	
	
	==> dmesg <==
	[  +3.991957] overlayfs: idmapped layers are currently not supported
	[ +35.228669] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:15] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:16] overlayfs: idmapped layers are currently not supported
	[  +4.168000] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:17] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:18] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:19] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:24] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:25] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:26] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:27] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:28] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:29] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:30] overlayfs: idmapped layers are currently not supported
	[  +6.342378] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:31] overlayfs: idmapped layers are currently not supported
	[ +25.558454] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:32] overlayfs: idmapped layers are currently not supported
	[ +27.925408] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:33] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:34] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:37] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:38] overlayfs: idmapped layers are currently not supported
	[Dec 6 11:49] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [133d8dbea499991081a35996bff0233170d2b1a868bf3857b507c1583c7a9ec1] <==
	{"level":"warn","ts":"2025-12-06T11:50:19.018438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.055666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.072973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.095158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.122317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.139252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.149068Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.166069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.184596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.208433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.235050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.259707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.302050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.324376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.383514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.409511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.455912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.466585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.490854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.508319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.528231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.548710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.564946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.584811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:50:19.688050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53280","server-name":"","error":"EOF"}
	
	
	==> etcd [c1f11f7bb9c73c4b9f1bda5d2219c7bde9c1a3799ace49169aa65d33d5ef49d0] <==
	{"level":"warn","ts":"2025-12-06T11:49:14.178701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:49:14.195641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:49:14.218166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:49:14.245462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:49:14.258858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:49:14.287595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-06T11:49:14.337926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39608","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-06T11:50:08.143083Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-06T11:50:08.143144Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-508007","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-12-06T11:50:08.143235Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-06T11:50:08.303085Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-12-06T11:50:08.303241Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-06T11:50:08.303286Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-06T11:50:08.303294Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-12-06T11:50:08.303272Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-06T11:50:08.303346Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-06T11:50:08.303418Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-06T11:50:08.303464Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-06T11:50:08.303363Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-12-06T11:50:08.303548Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-06T11:50:08.303553Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-12-06T11:50:08.306990Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-12-06T11:50:08.307086Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-06T11:50:08.307121Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-06T11:50:08.307128Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-508007","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 11:50:42 up  3:33,  0 user,  load average: 1.67, 1.61, 1.82
	Linux pause-508007 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [29181bd0fdf81e45e507ff05c995486187c8a5460d27b3f5b331dbc17b03e19d] <==
	I1206 11:50:16.215355       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1206 11:50:16.226903       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1206 11:50:16.227054       1 main.go:148] setting mtu 1500 for CNI 
	I1206 11:50:16.227067       1 main.go:178] kindnetd IP family: "ipv4"
	I1206 11:50:16.227085       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-06T11:50:16Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1206 11:50:16.433993       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1206 11:50:16.434075       1 controller.go:381] "Waiting for informer caches to sync"
	I1206 11:50:16.434123       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1206 11:50:16.447248       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1206 11:50:20.614442       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1206 11:50:20.614515       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"networkpolicies\" in API group \"networking.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1206 11:50:20.614590       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1206 11:50:20.614633       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1206 11:50:21.852091       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1206 11:50:21.852121       1 metrics.go:72] Registering metrics
	I1206 11:50:21.852170       1 controller.go:711] "Syncing nftables rules"
	I1206 11:50:26.434262       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1206 11:50:26.434322       1 main.go:301] handling current node
	I1206 11:50:36.434299       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1206 11:50:36.434335       1 main.go:301] handling current node
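
The 11:50:20 "is forbidden" errors above land while the restarted kube-apiserver is still syncing RBAC (its node_authorizer caches sync at 11:50:20.81 in the apiserver log below), and kindnet recovers one second later. A hedged follow-up check on the service account's permissions once the cluster settles, using standard kubectl impersonation:

	kubectl auth can-i list nodes \
	  --as=system:serviceaccount:kube-system:kindnet

The same check applies to the pods, namespaces, and networkpolicies resources named in the errors.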
	
	
	==> kindnet [4e43dc49fbc59be905132b7da10ac28f16fc2c8f552bd7b9c8f94927db5ca288] <==
	I1206 11:49:23.514429       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1206 11:49:23.515407       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1206 11:49:23.515586       1 main.go:148] setting mtu 1500 for CNI 
	I1206 11:49:23.515626       1 main.go:178] kindnetd IP family: "ipv4"
	I1206 11:49:23.515661       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-06T11:49:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1206 11:49:23.714820       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1206 11:49:23.715571       1 controller.go:381] "Waiting for informer caches to sync"
	I1206 11:49:23.715636       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1206 11:49:23.715766       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1206 11:49:53.714884       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1206 11:49:53.715864       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1206 11:49:53.715874       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1206 11:49:53.715959       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1206 11:49:55.216092       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1206 11:49:55.216128       1 metrics.go:72] Registering metrics
	I1206 11:49:55.216194       1 controller.go:711] "Syncing nftables rules"
	I1206 11:50:03.720965       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1206 11:50:03.721098       1 main.go:301] handling current node
	
	
	==> kube-apiserver [102a55336c508b8d8bd4e02e9e13195ac8d952a43ccd1ad108693e4e8e3ddd88] <==
	W1206 11:50:08.169163       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.170783       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.170841       1 logging.go:55] [core] [Channel #12 SubChannel #14]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.170892       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.170935       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.170984       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.171028       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.171069       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.171112       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.171152       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.171203       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.171272       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.171488       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.171580       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.171648       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.172272       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.172366       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.172540       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.172755       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.172988       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.173034       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.173077       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.173119       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.173164       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1206 11:50:08.173201       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
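
These connection-refused warnings are the outgoing gRPC channels of the old apiserver failing after its etcd closed (the c1f11f7b... shutdown above); they end when the container exits, and the ad2550eb... instance below takes over. A hedged readiness probe against the replacement, using the apiserver's standard aggregated endpoint:

	kubectl get --raw='/readyz?verbose' | grep etcd

which should show the etcd checks reporting ok on a healthy control plane.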
	
	
	==> kube-apiserver [ad2550eb9af565122164d80705b913cbc9bf4229a2e36150fead0da5caa0dbcd] <==
	I1206 11:50:20.650598       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1206 11:50:20.671349       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1206 11:50:20.677570       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1206 11:50:20.678376       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1206 11:50:20.715841       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1206 11:50:20.715880       1 policy_source.go:240] refreshing policies
	I1206 11:50:20.750772       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1206 11:50:20.750852       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1206 11:50:20.754500       1 cache.go:39] Caches are synced for autoregister controller
	I1206 11:50:20.765773       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1206 11:50:20.768751       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1206 11:50:20.769327       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1206 11:50:20.770258       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1206 11:50:20.776158       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1206 11:50:20.776214       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1206 11:50:20.776512       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1206 11:50:20.795946       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1206 11:50:20.814072       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1206 11:50:20.824362       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1206 11:50:21.465052       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1206 11:50:22.377015       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1206 11:50:23.834163       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1206 11:50:24.032750       1 controller.go:667] quota admission added evaluator for: endpoints
	I1206 11:50:24.082621       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1206 11:50:24.136184       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [2390e459c24305efc554a27a13a06262c95efaabcf624ba8fde250b8bad71a9a] <==
	I1206 11:50:23.742585       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1206 11:50:23.750879       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1206 11:50:23.750984       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1206 11:50:23.754181       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1206 11:50:23.754339       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1206 11:50:23.756981       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1206 11:50:23.757110       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1206 11:50:23.764965       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1206 11:50:23.765106       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 11:50:23.765143       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1206 11:50:23.765173       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1206 11:50:23.769515       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 11:50:23.769618       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1206 11:50:23.777383       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1206 11:50:23.779475       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1206 11:50:23.779495       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1206 11:50:23.779520       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1206 11:50:23.783990       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1206 11:50:23.786886       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 11:50:23.793665       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1206 11:50:23.793800       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1206 11:50:23.793847       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1206 11:50:23.793874       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1206 11:50:23.793901       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1206 11:50:23.801200       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	
	
	==> kube-controller-manager [9947888e935f9906b6a8f63bf826008ea52d75f58d3993f4ddbe6b0e8d1b28bb] <==
	I1206 11:49:22.053512       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1206 11:49:22.057902       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 11:49:22.067287       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1206 11:49:22.073070       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1206 11:49:22.074324       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1206 11:49:22.075585       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1206 11:49:22.075607       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1206 11:49:22.076131       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1206 11:49:22.077054       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1206 11:49:22.077060       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1206 11:49:22.077379       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1206 11:49:22.077473       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1206 11:49:22.077515       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1206 11:49:22.077678       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1206 11:49:22.077706       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1206 11:49:22.077715       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1206 11:49:22.077858       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1206 11:49:22.082604       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1206 11:49:22.082769       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1206 11:49:22.082893       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1206 11:49:22.082930       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1206 11:49:22.082961       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1206 11:49:22.095289       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-508007" podCIDRs=["10.244.0.0/24"]
	I1206 11:49:22.102748       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1206 11:50:07.034969       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [aaae6a7e273cd079a387ceb0a651520860046fb1f5d6f1b68400e13685bcb58e] <==
	I1206 11:50:17.139224       1 server_linux.go:53] "Using iptables proxy"
	I1206 11:50:19.208452       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1206 11:50:20.795764       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"pause-508007\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1206 11:50:22.309495       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 11:50:22.309547       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1206 11:50:22.309626       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 11:50:22.350104       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 11:50:22.350219       1 server_linux.go:132] "Using iptables Proxier"
	I1206 11:50:22.355261       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 11:50:22.355594       1 server.go:527] "Version info" version="v1.34.2"
	I1206 11:50:22.355781       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 11:50:22.357218       1 config.go:200] "Starting service config controller"
	I1206 11:50:22.357286       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 11:50:22.357329       1 config.go:106] "Starting endpoint slice config controller"
	I1206 11:50:22.357369       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 11:50:22.357414       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 11:50:22.357442       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 11:50:22.358094       1 config.go:309] "Starting node config controller"
	I1206 11:50:22.358144       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 11:50:22.358174       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 11:50:22.457719       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1206 11:50:22.457813       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 11:50:22.457840       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [e2d729fb666f2c3372fe7e8b35b9708de236469e7c191e9607f7590ca46e22a1] <==
	I1206 11:49:23.441920       1 server_linux.go:53] "Using iptables proxy"
	I1206 11:49:23.529125       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1206 11:49:23.629966       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1206 11:49:23.630006       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1206 11:49:23.630090       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1206 11:49:23.648605       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1206 11:49:23.648665       1 server_linux.go:132] "Using iptables Proxier"
	I1206 11:49:23.652513       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1206 11:49:23.653287       1 server.go:527] "Version info" version="v1.34.2"
	I1206 11:49:23.653335       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 11:49:23.656355       1 config.go:200] "Starting service config controller"
	I1206 11:49:23.656479       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1206 11:49:23.656527       1 config.go:106] "Starting endpoint slice config controller"
	I1206 11:49:23.656555       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1206 11:49:23.656593       1 config.go:403] "Starting serviceCIDR config controller"
	I1206 11:49:23.656623       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1206 11:49:23.658609       1 config.go:309] "Starting node config controller"
	I1206 11:49:23.658694       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1206 11:49:23.658726       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1206 11:49:23.756878       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1206 11:49:23.756966       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1206 11:49:23.756846       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
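
Both kube-proxy instances emit the same advisory "nodePortAddresses is unset" warning, which reflects the default kubeadm-rendered configuration rather than a failure. A hedged way to inspect that configuration (kubeadm stores it in a ConfigMap on clusters provisioned like this one):

	kubectl -n kube-system get configmap kube-proxy -o yaml

Setting nodePortAddresses to ["primary"] there, as the warning itself suggests, limits NodePort listeners to the node's primary addresses.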
	
	
	==> kube-scheduler [32544fef39c6ea0b6046c208ce2e51a0df4f3b80aa280b7c76f9760b3e50af4d] <==
	I1206 11:50:20.656127       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1206 11:50:20.658533       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1206 11:50:20.670070       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1206 11:50:20.670148       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 11:50:20.670394       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1206 11:50:20.692578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1206 11:50:20.707716       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 11:50:20.707866       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 11:50:20.707968       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 11:50:20.708065       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1206 11:50:20.708184       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1206 11:50:20.708259       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 11:50:20.708339       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1206 11:50:20.708423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1206 11:50:20.708541       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1206 11:50:20.708649       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 11:50:20.708696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1206 11:50:20.708741       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 11:50:20.708800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 11:50:20.708835       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 11:50:20.708875       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 11:50:20.711746       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 11:50:20.711881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1206 11:50:20.711928       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1206 11:50:21.671357       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [80b9d15d2b9dbc9bd53e824fe457e3a98241cab7b0c1a75bc72b601c225e0a3e] <==
	E1206 11:49:15.117953       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1206 11:49:15.118013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 11:49:15.118533       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1206 11:49:15.118600       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1206 11:49:15.120088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1206 11:49:15.969553       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1206 11:49:16.013662       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1206 11:49:16.118001       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1206 11:49:16.141596       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1206 11:49:16.155303       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1206 11:49:16.191125       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1206 11:49:16.208551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1206 11:49:16.343239       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1206 11:49:16.363687       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1206 11:49:16.370321       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1206 11:49:16.398612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1206 11:49:16.417477       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1206 11:49:16.433413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1206 11:49:18.602126       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 11:50:08.144415       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1206 11:50:08.144453       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1206 11:50:08.144479       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1206 11:50:08.144509       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1206 11:50:08.144777       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1206 11:50:08.144808       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 06 11:50:15 pause-508007 kubelet[1331]: E1206 11:50:15.941149    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-9zw56\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="407010ad-d437-4e90-bbdc-f9eeb5479739" pod="kube-system/kindnet-9zw56"
	Dec 06 11:50:15 pause-508007 kubelet[1331]: I1206 11:50:15.958669    1331 scope.go:117] "RemoveContainer" containerID="c1f11f7bb9c73c4b9f1bda5d2219c7bde9c1a3799ace49169aa65d33d5ef49d0"
	Dec 06 11:50:15 pause-508007 kubelet[1331]: E1206 11:50:15.959512    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-9zw56\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="407010ad-d437-4e90-bbdc-f9eeb5479739" pod="kube-system/kindnet-9zw56"
	Dec 06 11:50:15 pause-508007 kubelet[1331]: E1206 11:50:15.959821    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-94krv\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="cb647f7b-cb31-4d99-9254-82bfdce366fc" pod="kube-system/coredns-66bc5c9577-94krv"
	Dec 06 11:50:15 pause-508007 kubelet[1331]: E1206 11:50:15.960020    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-508007\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="3529c3ad39a1bd2ef07faa509a1790cc" pod="kube-system/kube-scheduler-pause-508007"
	Dec 06 11:50:15 pause-508007 kubelet[1331]: E1206 11:50:15.960441    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-508007\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="3f952fbcb2ee2eb41910ebc976cbed46" pod="kube-system/etcd-pause-508007"
	Dec 06 11:50:15 pause-508007 kubelet[1331]: E1206 11:50:15.960734    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-508007\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="81b9e1c9a2f1233c6200feb8279d93d7" pod="kube-system/kube-apiserver-pause-508007"
	Dec 06 11:50:15 pause-508007 kubelet[1331]: E1206 11:50:15.961057    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-508007\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="9ae96daea84f9f9533e7623250412599" pod="kube-system/kube-controller-manager-pause-508007"
	Dec 06 11:50:15 pause-508007 kubelet[1331]: E1206 11:50:15.961417    1331 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dn8b7\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="9b63dc69-331f-47e6-b0bb-f21401e05ff6" pod="kube-system/kube-proxy-dn8b7"
	Dec 06 11:50:20 pause-508007 kubelet[1331]: E1206 11:50:20.580170    1331 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-508007\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-508007' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Dec 06 11:50:20 pause-508007 kubelet[1331]: E1206 11:50:20.580317    1331 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-dn8b7\" is forbidden: User \"system:node:pause-508007\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-508007' and this object" podUID="9b63dc69-331f-47e6-b0bb-f21401e05ff6" pod="kube-system/kube-proxy-dn8b7"
	Dec 06 11:50:20 pause-508007 kubelet[1331]: E1206 11:50:20.581119    1331 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-508007\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-508007' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Dec 06 11:50:20 pause-508007 kubelet[1331]: E1206 11:50:20.581165    1331 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-508007\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-508007' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Dec 06 11:50:20 pause-508007 kubelet[1331]: E1206 11:50:20.602088    1331 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-9zw56\" is forbidden: User \"system:node:pause-508007\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-508007' and this object" podUID="407010ad-d437-4e90-bbdc-f9eeb5479739" pod="kube-system/kindnet-9zw56"
	Dec 06 11:50:20 pause-508007 kubelet[1331]: E1206 11:50:20.630545    1331 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-94krv\" is forbidden: User \"system:node:pause-508007\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-508007' and this object" podUID="cb647f7b-cb31-4d99-9254-82bfdce366fc" pod="kube-system/coredns-66bc5c9577-94krv"
	Dec 06 11:50:20 pause-508007 kubelet[1331]: E1206 11:50:20.651972    1331 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-508007\" is forbidden: User \"system:node:pause-508007\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-508007' and this object" podUID="3529c3ad39a1bd2ef07faa509a1790cc" pod="kube-system/kube-scheduler-pause-508007"
	Dec 06 11:50:20 pause-508007 kubelet[1331]: E1206 11:50:20.684594    1331 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-508007\" is forbidden: User \"system:node:pause-508007\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-508007' and this object" podUID="3f952fbcb2ee2eb41910ebc976cbed46" pod="kube-system/etcd-pause-508007"
	Dec 06 11:50:20 pause-508007 kubelet[1331]: E1206 11:50:20.706730    1331 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-508007\" is forbidden: User \"system:node:pause-508007\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-508007' and this object" podUID="81b9e1c9a2f1233c6200feb8279d93d7" pod="kube-system/kube-apiserver-pause-508007"
	Dec 06 11:50:20 pause-508007 kubelet[1331]: E1206 11:50:20.714750    1331 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-508007\" is forbidden: User \"system:node:pause-508007\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-508007' and this object" podUID="9ae96daea84f9f9533e7623250412599" pod="kube-system/kube-controller-manager-pause-508007"
	Dec 06 11:50:20 pause-508007 kubelet[1331]: E1206 11:50:20.727618    1331 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-508007\" is forbidden: User \"system:node:pause-508007\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-508007' and this object" podUID="9ae96daea84f9f9533e7623250412599" pod="kube-system/kube-controller-manager-pause-508007"
	Dec 06 11:50:20 pause-508007 kubelet[1331]: E1206 11:50:20.735697    1331 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-dn8b7\" is forbidden: User \"system:node:pause-508007\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-508007' and this object" podUID="9b63dc69-331f-47e6-b0bb-f21401e05ff6" pod="kube-system/kube-proxy-dn8b7"
	Dec 06 11:50:20 pause-508007 kubelet[1331]: E1206 11:50:20.779041    1331 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-9zw56\" is forbidden: User \"system:node:pause-508007\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-508007' and this object" podUID="407010ad-d437-4e90-bbdc-f9eeb5479739" pod="kube-system/kindnet-9zw56"
	Dec 06 11:50:37 pause-508007 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 06 11:50:37 pause-508007 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 06 11:50:37 pause-508007 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-508007 -n pause-508007
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-508007 -n pause-508007: exit status 2 (400.177948ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
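
The --format={{.APIServer}} value above is a Go text/template rendered over minikube's status struct, which is why stdout is just "Running" while the exit code separately encodes component state (hence the harness note "may be ok"). A minimal sketch of that rendering; the Status struct here is a simplified stand-in, not minikube's exact type:

    package main

    import (
        "os"
        "text/template"
    )

    // Status is an assumed, simplified stand-in for minikube's status struct.
    type Status struct {
        Host, Kubelet, APIServer string
    }

    func main() {
        // The same template text passed via --format above.
        tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
        _ = tmpl.Execute(os.Stdout, Status{APIServer: "Running"}) // prints: Running
    }
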
helpers_test.go:269: (dbg) Run:  kubectl --context pause-508007 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.83s)

TestNetworkPlugins/group/kindnet/NetCatPod (7200.126s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-056470 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gww4p" [b70b73c7-ad85-4bc1-9e72-e1edbe6b6fa3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
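
The 15m wait above (net_test.go:163) is a poll over pods matching the app=netcat label selector. A minimal client-go sketch of that loop, simplified to a phase check (the harness output above also tracks readiness conditions); the function name, 2s interval, and clientset wiring are assumptions, not minikube's exact code:

    package demo

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForNetcat polls until a pod labelled app=netcat is Running.
    func waitForNetcat(ctx context.Context, cs kubernetes.Interface) error {
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, 15*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pods, err := cs.CoreV1().Pods("default").List(ctx, metav1.ListOptions{
                    LabelSelector: "app=netcat",
                })
                if err != nil {
                    return false, nil // tolerate transient API errors, keep polling
                }
                for _, p := range pods.Items {
                    if p.Status.Phase == corev1.PodRunning {
                        return true, nil
                    }
                }
                return false, nil // e.g. still Pending, as above
            })
    }
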
panic: test timed out after 2h0m0s
	running tests:
		TestNetworkPlugins (34m23s)
		TestNetworkPlugins/group/bridge (3s)
		TestNetworkPlugins/group/bridge/Start (3s)
		TestNetworkPlugins/group/kindnet (59s)
		TestNetworkPlugins/group/kindnet/NetCatPod (2s)
		TestStartStop (36m58s)
		TestStartStop/group (3s)
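
The dump that follows is Go's standard test-binary deadline behavior: the suite runs with -timeout 2h0m0s, testing.(*M).startAlarm fires (goroutine 6398 below), and every live goroutine is printed; the "running tests" list names whichever tests were still inside tRunner at that moment. A minimal illustrative reproduction, not minikube code:

    package demo

    import (
        "testing"
        "time"
    )

    // TestOutlivesDeadline: run with "go test -timeout 2h" and the alarm
    // goroutine panics the binary with exactly this report's
    // "panic: test timed out after 2h0m0s" plus a full goroutine dump.
    func TestOutlivesDeadline(t *testing.T) {
        time.Sleep(3 * time.Hour)
    }
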

goroutine 6398 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2682 +0x2b0
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x38

goroutine 1 [chan receive, 30 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x4000398c40, 0x40006c7bb8)
	/usr/local/go/src/testing/testing.go:1940 +0x104
testing.runTests(0x40006340c0, {0x534c580, 0x2c, 0x2c}, {0x40006c7d08?, 0x125774?, 0x5374f80?})
	/usr/local/go/src/testing/testing.go:2475 +0x3b8
testing.(*M).Run(0x40008445a0)
	/usr/local/go/src/testing/testing.go:2337 +0x530
k8s.io/minikube/test/integration.TestMain(0x40008445a0)
	/home/jenkins/workspace/Build_Cross/test/integration/main_test.go:64 +0xf0
main.main()
	_testmain.go:133 +0x88

goroutine 124 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe900, {{0x36f34b0, 0x4000224080?}, 0x40006f56c0?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 149
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 6080 [chan receive, 2 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001b3ed80, 0x4000104150)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 6081
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 1039 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0x40017cf800, 0x40017ccee0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 759
	/usr/local/go/src/os/exec/exec.go:775 +0x678
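
Several goroutines in this dump (1039, 1071, 992, 2032) have been parked for 80-109 minutes in os/exec.(*Cmd).watchCtx on a channel send. That is typically the signature of a context-bound command that was Started but never Waited on: once the context ends, watchCtx blocks sending its result to a Wait that never comes. An illustrative sketch of the shape; the function name, command, and timeout are assumptions:

    package demo

    import (
        "context"
        "os/exec"
        "time"
    )

    // leak shows the hang's shape only; the fix is to pair Start with Wait.
    func leak(ctx context.Context) error {
        ctx, cancel := context.WithTimeout(ctx, time.Second)
        defer cancel()

        cmd := exec.CommandContext(ctx, "sleep", "60")
        if err := cmd.Start(); err != nil { // Start spawns watchCtx internally
            return err
        }
        // Returning here instead would strand watchCtx on its result send,
        // producing the "chan send" stacks above.
        return cmd.Wait() // receiving side; reaps the child and watchCtx
    }
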

goroutine 3712 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x40018e2590, 0x17)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40018e2580)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701e00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40015f2b40)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4001502000?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b90?, 0x4000104150?}, 0x40019f26a8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b90, 0x4000104150}, 0x40013f5f38, {0x369d700, 0x4001b2c330}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x40019f27a8?, {0x369d700?, 0x4001b2c330?}, 0x50?, 0x90a297365747962?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001c8c260, 0x3b9aca00, 0x0, 0x1, 0x4000104150)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3726
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174
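
The recurring sync.Cond.Wait stacks (3712, 6348, 156, 827, 4006, 4228, 5442) are all the same shape: client-go's cert-rotation worker parked in a typed workqueue Get, waiting for an item. A minimal sketch of that consumer loop under the client-go v0.33 typed-workqueue API; the queue contents and function names are illustrative:

    package main

    import (
        "fmt"

        "k8s.io/client-go/util/workqueue"
    )

    // consume is the worker-loop shape from cert_rotation.go: Get parks the
    // goroutine (the sync.Cond.Wait state above) until an item or shutdown.
    func consume(q *workqueue.Typed[string]) {
        for {
            item, shutdown := q.Get()
            if shutdown {
                return
            }
            fmt.Println("processing", item)
            q.Done(item) // mark finished so the item may be re-added later
        }
    }

    func main() {
        q := workqueue.NewTyped[string]()
        go consume(q)
        q.Add("rotate-client-cert")
        q.ShutDownWithDrain() // waits for in-flight work, then stops the worker
    }
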

goroutine 158 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 157
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 157 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b90, 0x4000104150}, 0x40013e6f40, 0x4001316f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b90, 0x4000104150}, 0x38?, 0x40013e6f40, 0x40013e6f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b90?, 0x4000104150?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4000674480?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 125
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 125 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40014f4d20, 0x4000104150)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 149
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 1627 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b90, 0x4000104150}, 0x400009ef40, 0x400009ef88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b90, 0x4000104150}, 0x0?, 0x400009ef40, 0x400009ef88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b90?, 0x4000104150?}, 0x4001373c80?, 0x4000001180?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x40013a4600?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 1657
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 3725 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe900, {{0x36f34b0, 0x4000224080?}, 0x400173b800?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3708
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 6348 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x40018e2750, 0x0)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40018e2740)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701e00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x400137d500)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40013ebe88?, 0x21dd4?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b90?, 0x4000104150?}, 0x40013ebea8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b90, 0x4000104150}, 0x40006eaf38, {0x369d700, 0x4001766480}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x11?, {0x369d700?, 0x4001766480?}, 0x0?, 0x36e57f8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001936470, 0x3b9aca00, 0x0, 0x1, 0x4000104150)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 6345
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 1628 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 1627
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 3426 [chan receive]:
testing.(*testState).waitParallel(0x400013ca50)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1906 +0x4c4
testing.tRunner(0x4001646000, 0x339b6f8)
	/usr/local/go/src/testing/testing.go:1940 +0x104
created by testing.(*T).Run in goroutine 3228
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 156 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x40000f0990, 0x2d)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40000f0980)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701e00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40014f4d20)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40000dde30?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b90?, 0x4000104150?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b90, 0x4000104150}, 0x40014e1f38, {0x369d700, 0x400084cde0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f34b0?, {0x369d700?, 0x400084cde0?}, 0xd0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40007b1490, 0x3b9aca00, 0x0, 0x1, 0x4000104150)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 125
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 3730 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3729
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 6368 [syscall]:
syscall.Syscall6(0x5f, 0x3, 0x11, 0x4001317c38, 0x4, 0x40019be240, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:96 +0x2c
internal/syscall/unix.Waitid(0x4001317d98?, 0x1929a0?, 0xffffece131a3?, 0x0?, 0x40004a89c0?)
	/usr/local/go/src/internal/syscall/unix/waitid_linux.go:18 +0x44
os.(*Process).pidfdWait.func1(...)
	/usr/local/go/src/os/pidfd_linux.go:109
os.ignoringEINTR(...)
	/usr/local/go/src/os/file_posix.go:256
os.(*Process).pidfdWait(0x400060f280)
	/usr/local/go/src/os/pidfd_linux.go:108 +0x144
os.(*Process).wait(0x4001317d68?)
	/usr/local/go/src/os/exec_unix.go:25 +0x24
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:340
os/exec.(*Cmd).Wait(0x40002a4d80)
	/usr/local/go/src/os/exec/exec.go:922 +0x38
os/exec.(*Cmd).Run(0x40002a4d80)
	/usr/local/go/src/os/exec/exec.go:626 +0x38
k8s.io/minikube/test/integration.Run(0x400196fa40, 0x40002a4d80)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x154
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0x400196fa40)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:112 +0x44
testing.tRunner(0x400196fa40, 0x40023e67e0)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3528
	/usr/local/go/src/testing/testing.go:1997 +0x364
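
Goroutine 6368 is the one actually doing work at panic time: the integration Run helper (helpers_test.go:103) blocked in syscall.Waitid on a live child process (the bridge Start command). An assumed, simplified shape of that helper, not minikube's exact code:

    package demo

    import (
        "os/exec"
        "strings"
        "testing"
    )

    // run logs the command, then blocks until the child exits.
    // CombinedOutput is Start + Wait, and Wait is the pidfdWait/Waitid
    // frame in the stack above.
    func run(t *testing.T, cmd *exec.Cmd) ([]byte, error) {
        t.Helper()
        t.Logf("(dbg) Run:  %s", strings.Join(cmd.Args, " "))
        return cmd.CombinedOutput()
    }
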

goroutine 6100 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 6099
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 1657 [chan receive, 81 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001ee83c0, 0x4000104150)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 1655
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 1656 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe900, {{0x36f34b0, 0x4000224080?}, 0x40002a5500?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 1655
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 2032 [chan send, 80 minutes]:
os/exec.(*Cmd).watchCtx(0x4001470900, 0x400158db90)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1445
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 1071 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0x400194e780, 0x4001914c40)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1070
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 3729 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b90, 0x4000104150}, 0x40019f9740, 0x40014e0f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b90, 0x4000104150}, 0x6d?, 0x40019f9740, 0x40019f9788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b90?, 0x4000104150?}, 0x0?, 0x40019f9750?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f34b0?, 0x4000224080?, 0x400173b800?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3726
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 1045 [select, 109 minutes]:
net/http.(*persistConn).writeLoop(0x400193e240)
	/usr/local/go/src/net/http/transport.go:2600 +0x94
created by net/http.(*Transport).dialConn in goroutine 1075
	/usr/local/go/src/net/http/transport.go:1948 +0x1164

goroutine 829 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 828
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 4230 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4229
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 1271 [IO wait, 109 minutes]:
internal/poll.runtime_pollWait(0xffff736c0800, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4001928680?, 0xdbd0c?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x4001928680)
	/usr/local/go/src/internal/poll/fd_unix.go:613 +0x21c
net.(*netFD).accept(0x4001928680)
	/usr/local/go/src/net/fd_unix.go:161 +0x28
net.(*TCPListener).accept(0x400197dd80)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x24
net.(*TCPListener).Accept(0x400197dd80)
	/usr/local/go/src/net/tcpsock.go:380 +0x2c
net/http.(*Server).Serve(0x40018a0500, {0x36d31a0, 0x400197dd80})
	/usr/local/go/src/net/http/server.go:3463 +0x24c
net/http.(*Server).ListenAndServe(0x40018a0500)
	/usr/local/go/src/net/http/server.go:3389 +0x80
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 1269
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x104
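
Goroutines 1271 and 630 are HTTP servers that startHTTPProxy (functional_test.go:2217) left serving for the tests' lifetime; both idle in Accept. A sketch of that shape only; the reverse-proxy handler, parameters, and error handling here are assumptions, not minikube's actual implementation:

    package demo

    import (
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    // startHTTPProxy serves a proxy from a background goroutine and returns
    // immediately, leaving the server running for the test's duration.
    func startHTTPProxy(target *url.URL, addr string) *http.Server {
        srv := &http.Server{
            Addr:    addr,
            Handler: httputil.NewSingleHostReverseProxy(target),
        }
        go func() {
            // ListenAndServe parks in Accept between requests — the
            // "IO wait, 113 minutes" state shown for goroutine 630.
            _ = srv.ListenAndServe()
        }()
        return srv
    }
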

goroutine 813 [chan receive, 109 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40006e5320, 0x4000104150)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 811
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 828 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b90, 0x4000104150}, 0x40013e5f40, 0x4001319f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b90, 0x4000104150}, 0xc0?, 0x40013e5f40, 0x40013e5f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b90?, 0x4000104150?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x40002a4480?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 813
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 827 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0x4001f78690, 0x2b)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001f78680)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701e00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40006e5320)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x400017b570?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b90?, 0x4000104150?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b90, 0x4000104150}, 0x400131bf38, {0x369d700, 0x400155eff0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f34b0?, {0x369d700?, 0x400155eff0?}, 0x30?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40007b14e0, 0x3b9aca00, 0x0, 0x1, 0x4000104150)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 813
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 6370 [IO wait]:
internal/poll.runtime_pollWait(0xffff73b12200, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x40013b4b00?, 0x4000879000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x40013b4b00, {0x4000879000, 0x1800, 0x1800})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
net.(*netFD).Read(0x40013b4b00, {0x4000879000?, 0x4000879000?, 0x5?})
	/usr/local/go/src/net/fd_posix.go:68 +0x28
net.(*conn).Read(0x4001d86178, {0x4000879000?, 0x40013f4888?, 0x8b27c?})
	/usr/local/go/src/net/net.go:196 +0x34
crypto/tls.(*atLeastReader).Read(0x40012f2708, {0x4000879000?, 0x40013f48e8?, 0x2cb794?})
	/usr/local/go/src/crypto/tls/conn.go:816 +0x38
bytes.(*Buffer).ReadFrom(0x40004842a8, {0x369de20, 0x40012f2708})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
crypto/tls.(*Conn).readFromUntil(0x4000484008, {0xffff737b07a0, 0x400186ef60}, 0x40013f4990?)
	/usr/local/go/src/crypto/tls/conn.go:838 +0xcc
crypto/tls.(*Conn).readRecordOrCCS(0x4000484008, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:627 +0x340
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:589
crypto/tls.(*Conn).Read(0x4000484008, {0x40018f0000, 0x1000, 0x542e2c?})
	/usr/local/go/src/crypto/tls/conn.go:1392 +0x14c
bufio.(*Reader).Read(0x4001423da0, {0x40001b3700, 0x9, 0x542e44?})
	/usr/local/go/src/bufio/bufio.go:245 +0x188
io.ReadAtLeast({0x369bd60, 0x4001423da0}, {0x40001b3700, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x98
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0x40001b3700, 0x9, 0x4001767c80?}, {0x369bd60?, 0x4001423da0?})
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.43.0/http2/frame.go:242 +0x58
golang.org/x/net/http2.(*Framer).ReadFrame(0x40001b36c0)
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.43.0/http2/frame.go:506 +0x70
golang.org/x/net/http2.(*clientConnReadLoop).run(0x40013f4f98)
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.43.0/http2/transport.go:2258 +0xcc
golang.org/x/net/http2.(*ClientConn).readLoop(0x400196f880)
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.43.0/http2/transport.go:2127 +0x6c
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 6369
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.43.0/http2/transport.go:912 +0xae0

goroutine 4006 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x4001376d90, 0x16)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001376d80)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701e00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40015f3620)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x400158d6c0?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b90?, 0x4000104150?}, 0x40019f56a8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b90, 0x4000104150}, 0x40013faf38, {0x369d700, 0x40013cfb00}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x40019f57a8?, {0x369d700?, 0x40013cfb00?}, 0x40?, 0x40013d1200?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40017954c0, 0x3b9aca00, 0x0, 0x1, 0x4000104150)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4003
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 4003 [chan receive, 27 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40015f3620, 0x4000104150)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3982
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 4248 [chan receive, 13 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40019d0a20, 0x4000104150)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4246
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 992 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0x400177c600, 0x400174b3b0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 991
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 3150 [chan receive, 34 minutes]:
testing.(*T).Run(0x40015821c0, {0x296d53f?, 0xaa38dd7dff4?}, 0x400145a048)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestNetworkPlugins(0x40015821c0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:52 +0xe4
testing.tRunner(0x40015821c0, 0x339b4c8)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1997 +0x364
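
Goroutines 3150, 3228, 3528, 3531, and 3426 show the test tree itself: parents blocked in chan receive inside (*T).Run until their subtests finish, and queued subtests waiting in waitParallel for a parallel slot. An illustrative sketch of that nesting, not the real net_test.go:

    package demo

    import "testing"

    // TestGroupShape mirrors the structure of these stacks only.
    func TestGroupShape(t *testing.T) {
        t.Run("group", func(t *testing.T) { // parent parks in chan receive
            for _, name := range []string{"kindnet", "bridge"} {
                t.Run(name, func(t *testing.T) {
                    t.Parallel() // blocks in waitParallel until a slot frees
                    // per-plugin Start / NetCatPod steps would run here
                })
            }
        })
    }
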

goroutine 3531 [chan receive]:
testing.(*T).Run(0x40019c2c40, {0x2976268?, 0x3689fd0?}, 0x4001b2d710)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x40019c2c40)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:148 +0x724
testing.tRunner(0x40019c2c40, 0x4001928500)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3448
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 6344 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe900, {{0x36f34b0, 0x4000224080?}, 0x400196e700?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 6343
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 4007 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b90, 0x4000104150}, 0x4001408740, 0x4001408788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b90, 0x4000104150}, 0xf8?, 0x4001408740, 0x4001408788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b90?, 0x4000104150?}, 0x0?, 0x95c64?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x40007689c0?, 0x95c64?, 0x4000674480?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4003
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 6403 [select]:
os/exec.(*Cmd).watchCtx(0x40002a4d80, 0x4001914540)
	/usr/local/go/src/os/exec/exec.go:789 +0x70
created by os/exec.(*Cmd).Start in goroutine 6368
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 4228 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0x4001376510, 0x2)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001376500)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701e00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40019d0a20)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4000296070?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b90?, 0x4000104150?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b90, 0x4000104150}, 0x40014dbf38, {0x369d700, 0x400167a360}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f34b0?, {0x369d700?, 0x400167a360?}, 0x50?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001c8c160, 0x3b9aca00, 0x0, 0x1, 0x4000104150)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4248
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 812 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe900, {{0x36f34b0, 0x4000224080?}, 0x40002a4780?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 811
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 5422 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe900, {{0x36f34b0, 0x4000224080?}, 0x4001582fc0?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5421
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 3726 [chan receive, 32 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40015f2b40, 0x4000104150)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3708
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 6350 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 6349
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 630 [IO wait, 113 minutes]:
internal/poll.runtime_pollWait(0xffff73b12600, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x40000dbb80?, 0x2d970?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x40000dbb80)
	/usr/local/go/src/internal/poll/fd_unix.go:613 +0x21c
net.(*netFD).accept(0x40000dbb80)
	/usr/local/go/src/net/fd_unix.go:161 +0x28
net.(*TCPListener).accept(0x40007eb140)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x24
net.(*TCPListener).Accept(0x40007eb140)
	/usr/local/go/src/net/tcpsock.go:380 +0x2c
net/http.(*Server).Serve(0x40000fa600, {0x36d31a0, 0x40007eb140})
	/usr/local/go/src/net/http/server.go:3463 +0x24c
net/http.(*Server).ListenAndServe(0x40000fa600)
	/usr/local/go/src/net/http/server.go:3389 +0x80
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 628
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x104

goroutine 3528 [chan receive]:
testing.(*T).Run(0x40019c2000, {0x296d544?, 0x3689fd0?}, 0x40023e67e0)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x40019c2000)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:111 +0x4f4
testing.tRunner(0x40019c2000, 0x4001928300)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3448
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 6401 [IO wait]:
internal/poll.runtime_pollWait(0xffff73b12e00, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4001a8aba0?, 0x400175e245?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x4001a8aba0, {0x400175e245, 0x5bb, 0x5bb})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x400187e090, {0x400175e245?, 0x40019f9d68?, 0x8b27c?})
	/usr/local/go/src/os/file.go:144 +0x68
bytes.(*Buffer).ReadFrom(0x40023e6900, {0x369bad8, 0x4001d86078})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
io.copyBuffer({0x369bcc0, 0x40023e6900}, {0x369bad8, 0x4001d86078}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x400187e090?, {0x369bcc0, 0x40023e6900})
	/usr/local/go/src/os/file.go:295 +0x58
os.(*File).WriteTo(0x400187e090, {0x369bcc0, 0x40023e6900})
	/usr/local/go/src/os/file.go:273 +0x9c
io.copyBuffer({0x369bcc0, 0x40023e6900}, {0x369bb58, 0x400187e090}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x40
os/exec.(*Cmd).Start.func2(0x400196fa40?)
	/usr/local/go/src/os/exec/exec.go:749 +0x30
created by os/exec.(*Cmd).Start in goroutine 6368
	/usr/local/go/src/os/exec/exec.go:748 +0x6a4

goroutine 6345 [chan receive]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x400137d500, 0x4000104150)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 6343
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 3228 [chan receive, 37 minutes]:
testing.(*T).Run(0x4001582700, {0x296d53f?, 0x40014dff58?}, 0x339b6f8)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop(0x4001582700)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:46 +0x3c
testing.tRunner(0x4001582700, 0x339b510)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 4229 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b90, 0x4000104150}, 0x400140af40, 0x400140af88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b90, 0x4000104150}, 0x58?, 0x400140af40, 0x400140af88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b90?, 0x4000104150?}, 0x36e57f8?, 0x4001c1be30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4000674300?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4248
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 1044 [select, 109 minutes]:
net/http.(*persistConn).readLoop(0x400193e240)
	/usr/local/go/src/net/http/transport.go:2398 +0xa6c
created by net/http.(*Transport).dialConn in goroutine 1075
	/usr/local/go/src/net/http/transport.go:1947 +0x111c

goroutine 5423 [chan receive, 5 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x400167c8a0, 0x4000104150)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5421
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 4002 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe900, {{0x36f34b0, 0x4000224080?}, 0x4001360300?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3982
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 5442 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0x40018e2cd0, 0x0)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40018e2cc0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701e00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x400167c8a0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40017ecfc0?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b90?, 0x4000104150?}, 0x40013eaea8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b90, 0x4000104150}, 0x400010cf38, {0x369d700, 0x400157e6f0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x11?, {0x369d700?, 0x400157e6f0?}, 0x28?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001b7b0b0, 0x3b9aca00, 0x0, 0x1, 0x4000104150)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5423
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174
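
Note: many of the parked goroutines in this dump share the same frames (Until -> JitterUntil -> BackoffUntil -> runWorker -> processNextWorkItem); they are client-go's certificate-rotation workers, each blocked waiting for work. A minimal sketch of that pattern, with a plain channel standing in for client-go's workqueue (an illustration of the shape, not client-go's actual code):

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// A channel stands in for the workqueue the workers Get() from; the real
	// queue dedupes items and supports shutdown, but the blocking shape is the same.
	queue := make(chan string, 4)
	stopCh := make(chan struct{})

	processNextWorkItem := func() bool {
		item, ok := <-queue // workers park here, like the sync.Cond.Wait frames in the dump
		if !ok {
			return false
		}
		fmt.Println("processed", item)
		return true
	}
	runWorker := func() {
		for processNextWorkItem() {
		}
	}

	// wait.Until re-runs runWorker every second until stopCh is closed --
	// the Until -> JitterUntil -> BackoffUntil frames seen above.
	go wait.Until(runWorker, time.Second, stopCh)

	queue <- "rotate-cert"
	time.Sleep(100 * time.Millisecond)
	close(queue)
	close(stopCh) // run() in the traces blocks on a channel receive until this point
}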

goroutine 3448 [chan receive, 7 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x4001647500, 0x400145a048)
	/usr/local/go/src/testing/testing.go:1940 +0x104
created by testing.(*T).Run in goroutine 3150
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 4008 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4007
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 3529 [chan receive, 34 minutes]:
testing.(*testState).waitParallel(0x400013ca50)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x40019c21c0)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x40019c21c0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:500 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x40019c21c0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x40019c21c0, 0x4001928400)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3448
	/usr/local/go/src/testing/testing.go:1997 +0x364

goroutine 1960 [chan send, 80 minutes]:
os/exec.(*Cmd).watchCtx(0x40002a4d80, 0x4000083110)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1959
	/usr/local/go/src/os/exec/exec.go:775 +0x678
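
Note: a goroutine stuck in os/exec.(*Cmd).watchCtx on a channel send for 80 minutes usually means a command was Start()ed but its result was never consumed via Wait(), so the watcher goroutine can never deliver and leaks. The non-leaking shape, as a sketch (the command here is hypothetical, not one from this run):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	cmd := exec.CommandContext(ctx, "sleep", "60")
	if err := cmd.Start(); err != nil {
		fmt.Println("start:", err)
		return
	}
	// Pairing Start with Wait reaps the child and lets the internal
	// context-watcher goroutine hand off its result and exit.
	if err := cmd.Wait(); err != nil {
		fmt.Println("wait:", err) // "signal: killed" once the 5s timeout fires
	}
}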

goroutine 6098 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x40013773d0, 0x0)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40013773c0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701e00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001b3ed80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4001f66e00?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b90?, 0x4000104150?}, 0x40019f9ea8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b90, 0x4000104150}, 0x40014def38, {0x369d700, 0x4001bef980}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x40019f9fa8?, {0x369d700?, 0x4001bef980?}, 0x90?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x400136bb90, 0x3b9aca00, 0x0, 0x1, 0x4000104150)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 6080
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 1626 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0x400060fc10, 0x24)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x400060fc00)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701e00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001ee83c0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40000837a0?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b90?, 0x4000104150?}, 0x400009f6a8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b90, 0x4000104150}, 0x40013f8f38, {0x369d700, 0x40019f10e0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f34b0?, {0x369d700?, 0x40019f10e0?}, 0x9c?, 0x40016d2480?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001564170, 0x3b9aca00, 0x0, 0x1, 0x4000104150)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 1657
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 6079 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe900, {{0x36f34b0, 0x4000224080?}, 0x4001434780?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 6081
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 1992 [chan send, 80 minutes]:
os/exec.(*Cmd).watchCtx(0x40002a5b00, 0x4000083f10)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1991
	/usr/local/go/src/os/exec/exec.go:775 +0x678

goroutine 5158 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b90, 0x4000104150}, 0x4001406740, 0x4001406788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b90, 0x4000104150}, 0x0?, 0x4001406740, 0x4001406788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b90?, 0x4000104150?}, 0x36e57f8?, 0x4001476d20?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x4001476c40?, 0x0?, 0x4000674480?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5154
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 5157 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0x40018e3550, 0xf)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40018e3540)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701e00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001ee9b00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x4001c1b260?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b90?, 0x4000104150?}, 0x40019f96a8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b90, 0x4000104150}, 0x40006e7f38, {0x369d700, 0x400156a4e0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x11?, {0x369d700?, 0x400156a4e0?}, 0x28?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001794380, 0x3b9aca00, 0x0, 0x1, 0x4000104150)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5154
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 6099 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b90, 0x4000104150}, 0x40013e0740, 0x40013e0788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b90, 0x4000104150}, 0xac?, 0x40013e0740, 0x40013e0788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b90?, 0x4000104150?}, 0x0?, 0x40013e0750?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f34b0?, 0x4000224080?, 0x4001434780?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 6080
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 6402 [IO wait]:
internal/poll.runtime_pollWait(0xffff73677000, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x4001a8ac60?, 0x400132f503?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x4001a8ac60, {0x400132f503, 0x2afd, 0x2afd})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x400187e0a8, {0x400132f503?, 0x40019f7568?, 0x8b27c?})
	/usr/local/go/src/os/file.go:144 +0x68
bytes.(*Buffer).ReadFrom(0x40023e6930, {0x369bad8, 0x4001d86080})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
io.copyBuffer({0x369bcc0, 0x40023e6930}, {0x369bad8, 0x4001d86080}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x400187e0a8?, {0x369bcc0, 0x40023e6930})
	/usr/local/go/src/os/file.go:295 +0x58
os.(*File).WriteTo(0x400187e0a8, {0x369bcc0, 0x40023e6930})
	/usr/local/go/src/os/file.go:273 +0x9c
io.copyBuffer({0x369bcc0, 0x40023e6930}, {0x369bb58, 0x400187e0a8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x40
os/exec.(*Cmd).Start.func2(0x0?)
	/usr/local/go/src/os/exec/exec.go:749 +0x30
created by os/exec.(*Cmd).Start in goroutine 6368
	/usr/local/go/src/os/exec/exec.go:748 +0x6a4
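
Note: the [IO wait] goroutine above is the stdout/stderr copier that os/exec starts when Cmd.Stdout is not an *os.File: exec wires up a pipe and runs io.Copy into the supplied buffer ((*Cmd).writerDescriptor.func1). The equivalent done by hand, as a sketch:

package main

import (
	"bytes"
	"fmt"
	"io"
	"os/exec"
)

func main() {
	cmd := exec.Command("echo", "hello")
	out, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	// This goroutine is the analogue of the copier in the dump: it sits in
	// IO wait on the pipe until the child writes or closes its end.
	var buf bytes.Buffer
	done := make(chan error, 1)
	go func() {
		_, err := io.Copy(&buf, out)
		done <- err
	}()

	if err := <-done; err != nil {
		panic(err)
	}
	if err := cmd.Wait(); err != nil { // drain the pipe before Wait, per os/exec docs
		panic(err)
	}
	fmt.Print(buf.String())
}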

goroutine 5159 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5158
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 6392 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x36e57f8, 0x40003128c0}, {0x36d3800, 0x4001979e60}, 0x1, 0x0, 0x40006adba0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/loop.go:66 +0x158
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36e57f8?, 0x4000277b90?}, 0x3b9aca00, 0x40006addc8?, 0x1, 0x40006adba0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:48 +0x8c
k8s.io/minikube/test/integration.PodWait({0x36e57f8, 0x4000277b90}, 0x40014cc540, {0x400147a710, 0xe}, {0x2971231, 0x7}, {0x2978483, 0xa}, 0xd18c2e2800)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:379 +0x22c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.4(0x40014cc540)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:163 +0x2a0
testing.tRunner(0x40014cc540, 0x4001b2d710)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3531
	/usr/local/go/src/testing/testing.go:1997 +0x364
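
Note: goroutine 6392 is the TestNetworkPlugins pod wait (net_test.go:163): PodWait drives wait.PollUntilContextTimeout with a 1-second interval (the 0x3b9aca00 argument is 1e9 ns) and a 15-minute timeout (0xd18c2e2800 ns = 900 s). Stripped to its shape, with podReady standing in for the real pod-phase check:

package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	attempts := 0
	podReady := func(ctx context.Context) (bool, error) {
		attempts++
		return attempts >= 3, nil // pretend the pod turns Running on the third poll
	}

	// Poll once immediately, then every second, giving up after 15 minutes --
	// the same parameters visible in the stack above.
	err := wait.PollUntilContextTimeout(context.Background(), time.Second, 15*time.Minute, true, podReady)
	fmt.Println("attempts:", attempts, "err:", err)
}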

goroutine 4247 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe900, {{0x36f34b0, 0x4000224080?}, 0x4001582540?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4246
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 5154 [chan receive, 7 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001ee9b00, 0x4000104150)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5119
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 5153 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe900, {{0x36f34b0, 0x4000224080?}, 0x4001a5b6c0?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5119
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 5767 [chan receive, 4 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x400069bf20, 0x4000104150)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 5765
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

goroutine 5766 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36fe900, {{0x36f34b0, 0x4000224080?}, 0x4001b37880?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 5765
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

goroutine 5443 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b90, 0x4000104150}, 0x40013e0f40, 0x40013e0f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b90, 0x4000104150}, 0xf8?, 0x40013e0f40, 0x40013e0f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b90?, 0x4000104150?}, 0x0?, 0x40013e0f50?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f34b0?, 0x4000224080?, 0x4001582fc0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5423
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 5444 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5443
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

goroutine 5769 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0x4001d858d0, 0x0)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x4001d858c0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3701e00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x400069bf20)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40018568c0?, 0x21dd4?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e5b90?, 0x4000104150?}, 0x400140bea8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e5b90, 0x4000104150}, 0x4001324f38, {0x369d700, 0x400167a180}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x11?, {0x369d700?, 0x400167a180?}, 0x1?, 0x36e57f8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001381330, 0x3b9aca00, 0x0, 0x1, 0x4000104150)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5767
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

goroutine 6349 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b90, 0x4000104150}, 0x40019f4f40, 0x40019f4f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b90, 0x4000104150}, 0xd0?, 0x40019f4f40, 0x40019f4f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b90?, 0x4000104150?}, 0x0?, 0x40019f4f50?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f34b0?, 0x4000224080?, 0x400196e700?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 6345
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 5770 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e5b90, 0x4000104150}, 0x40013e4f40, 0x40013e4f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e5b90, 0x4000104150}, 0x0?, 0x40013e4f40, 0x40013e4f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e5b90?, 0x4000104150?}, 0x36e57f8?, 0x400174bf80?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x400174bea0?, 0x0?, 0x40019c2e00?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 5767
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

goroutine 5771 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 5770
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

Test pass (240/316)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 6.04
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.2/json-events 4.34
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.16
18 TestDownloadOnly/v1.34.2/DeleteAll 0.33
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.22
21 TestDownloadOnly/v1.35.0-beta.0/json-events 4.67
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.09
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.23
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.6
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
36 TestAddons/Setup 161.16
40 TestAddons/serial/GCPAuth/Namespaces 0.2
41 TestAddons/serial/GCPAuth/FakeCredentials 10.88
57 TestAddons/StoppedEnableDisable 12.69
58 TestCertOptions 41.24
59 TestCertExpiration 245.16
61 TestForceSystemdFlag 36.36
62 TestForceSystemdEnv 43.34
67 TestErrorSpam/setup 34.09
68 TestErrorSpam/start 0.83
69 TestErrorSpam/status 1.14
70 TestErrorSpam/pause 7.16
71 TestErrorSpam/unpause 5.52
72 TestErrorSpam/stop 1.54
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 80.41
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 28.74
79 TestFunctional/serial/KubeContext 0.07
80 TestFunctional/serial/KubectlGetPods 0.1
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.57
84 TestFunctional/serial/CacheCmd/cache/add_local 1.31
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.36
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.95
89 TestFunctional/serial/CacheCmd/cache/delete 0.12
90 TestFunctional/serial/MinikubeKubectlCmd 0.14
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
92 TestFunctional/serial/ExtraConfig 39.53
93 TestFunctional/serial/ComponentHealth 0.1
94 TestFunctional/serial/LogsCmd 1.52
95 TestFunctional/serial/LogsFileCmd 1.55
96 TestFunctional/serial/InvalidService 4.57
98 TestFunctional/parallel/ConfigCmd 0.52
99 TestFunctional/parallel/DashboardCmd 8.13
100 TestFunctional/parallel/DryRun 0.59
101 TestFunctional/parallel/InternationalLanguage 0.27
102 TestFunctional/parallel/StatusCmd 1.32
106 TestFunctional/parallel/ServiceCmdConnect 7.73
107 TestFunctional/parallel/AddonsCmd 0.22
108 TestFunctional/parallel/PersistentVolumeClaim 24.7
110 TestFunctional/parallel/SSHCmd 0.74
111 TestFunctional/parallel/CpCmd 2.71
113 TestFunctional/parallel/FileSync 0.38
114 TestFunctional/parallel/CertSync 2.41
118 TestFunctional/parallel/NodeLabels 0.22
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.67
122 TestFunctional/parallel/License 0.36
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.89
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.5
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
134 TestFunctional/parallel/ServiceCmd/DeployApp 7.24
135 TestFunctional/parallel/ServiceCmd/List 0.59
136 TestFunctional/parallel/ProfileCmd/profile_not_create 0.61
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.64
138 TestFunctional/parallel/ProfileCmd/profile_list 0.55
139 TestFunctional/parallel/ServiceCmd/HTTPS 0.52
140 TestFunctional/parallel/ProfileCmd/profile_json_output 0.59
141 TestFunctional/parallel/ServiceCmd/Format 0.56
142 TestFunctional/parallel/MountCmd/any-port 10.86
143 TestFunctional/parallel/ServiceCmd/URL 0.58
144 TestFunctional/parallel/MountCmd/specific-port 2.64
145 TestFunctional/parallel/Version/short 0.08
146 TestFunctional/parallel/Version/components 1.4
147 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
148 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
149 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
150 TestFunctional/parallel/ImageCommands/ImageListYaml 0.32
151 TestFunctional/parallel/ImageCommands/ImageBuild 4.16
152 TestFunctional/parallel/ImageCommands/Setup 0.67
153 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.7
154 TestFunctional/parallel/MountCmd/VerifyCleanup 2.7
155 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.07
156 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.35
157 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
158 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.17
159 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.18
160 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.51
161 TestFunctional/parallel/ImageCommands/ImageRemove 0.77
162 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.84
163 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.5
164 TestFunctional/delete_echo-server_images 0.05
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.05
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.59
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.14
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.06
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.06
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.33
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.96
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.12
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 0.98
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.14
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.49
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.46
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.25
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.15
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.68
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 2.17
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.37
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 2.15
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.77
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.36
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.06
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.51
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.43
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.24
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.23
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.26
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 3.72
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.3
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.6
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 1.04
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.35
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.15
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.15
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.19
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.44
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.68
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.86
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.56
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.1
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.43
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.39
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.43
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 1.94
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 1.82
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.04
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
264 TestMultiControlPlane/serial/StartCluster 209.16
265 TestMultiControlPlane/serial/DeployApp 8.06
266 TestMultiControlPlane/serial/PingHostFromPods 1.58
267 TestMultiControlPlane/serial/AddWorkerNode 59.4
268 TestMultiControlPlane/serial/NodeLabels 0.12
269 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.13
270 TestMultiControlPlane/serial/CopyFile 21.15
271 TestMultiControlPlane/serial/StopSecondaryNode 12.96
272 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.86
273 TestMultiControlPlane/serial/RestartSecondaryNode 21.11
274 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.13
275 TestMultiControlPlane/serial/RestartClusterKeepsNodes 118.15
276 TestMultiControlPlane/serial/DeleteSecondaryNode 12.29
277 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.81
278 TestMultiControlPlane/serial/StopCluster 36.21
279 TestMultiControlPlane/serial/RestartCluster 94.09
280 TestMultiControlPlane/serial/DegradedAfterClusterRestart 1.13
281 TestMultiControlPlane/serial/AddSecondaryNode 81.87
282 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.2
287 TestJSONOutput/start/Command 78.8
288 TestJSONOutput/start/Audit 0
290 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
291 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
294 TestJSONOutput/pause/Audit 0
296 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
297 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
300 TestJSONOutput/unpause/Audit 0
302 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
303 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
305 TestJSONOutput/stop/Command 5.89
306 TestJSONOutput/stop/Audit 0
308 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
309 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
310 TestErrorJSONOutput 0.25
312 TestKicCustomNetwork/create_custom_network 39.73
313 TestKicCustomNetwork/use_default_bridge_network 36.66
314 TestKicExistingNetwork 40.8
315 TestKicCustomSubnet 33.79
316 TestKicStaticIP 37.83
317 TestMainNoArgs 0.06
318 TestMinikubeProfile 70.57
321 TestMountStart/serial/StartWithMountFirst 9.21
322 TestMountStart/serial/VerifyMountFirst 0.3
323 TestMountStart/serial/StartWithMountSecond 8.87
324 TestMountStart/serial/VerifyMountSecond 0.28
325 TestMountStart/serial/DeleteFirst 1.74
326 TestMountStart/serial/VerifyMountPostDelete 0.28
327 TestMountStart/serial/Stop 1.29
328 TestMountStart/serial/RestartStopped 8.31
329 TestMountStart/serial/VerifyMountPostStop 0.29
332 TestMultiNode/serial/FreshStart2Nodes 105.75
333 TestMultiNode/serial/DeployApp2Nodes 5.04
334 TestMultiNode/serial/PingHostFrom2Pods 0.97
335 TestMultiNode/serial/AddNode 57.62
336 TestMultiNode/serial/MultiNodeLabels 0.1
337 TestMultiNode/serial/ProfileList 0.74
338 TestMultiNode/serial/CopyFile 10.8
339 TestMultiNode/serial/StopNode 2.51
340 TestMultiNode/serial/StartAfterStop 8.46
341 TestMultiNode/serial/RestartKeepsNodes 76.28
342 TestMultiNode/serial/DeleteNode 5.81
343 TestMultiNode/serial/StopMultiNode 24.08
344 TestMultiNode/serial/RestartMultiNode 49.32
345 TestMultiNode/serial/ValidateNameConflict 36.54
350 TestPreload 123.22
352 TestScheduledStopUnix 109.27
355 TestInsufficientStorage 12.73
356 TestRunningBinaryUpgrade 300.21
359 TestMissingContainerUpgrade 119.47
361 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
362 TestNoKubernetes/serial/StartWithK8s 44.17
363 TestNoKubernetes/serial/StartWithStopK8s 27.94
364 TestNoKubernetes/serial/Start 8.38
365 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
366 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
367 TestNoKubernetes/serial/ProfileList 0.7
368 TestNoKubernetes/serial/Stop 1.3
369 TestNoKubernetes/serial/StartNoArgs 7.79
370 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.4
371 TestStoppedBinaryUpgrade/Setup 1.06
372 TestStoppedBinaryUpgrade/Upgrade 307.01
373 TestStoppedBinaryUpgrade/MinikubeLogs 1.79
382 TestPause/serial/Start 83.16
383 TestPause/serial/SecondStartNoReconfiguration 30.34
TestDownloadOnly/v1.28.0/json-events (6.04s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-807014 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-807014 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.037846975s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (6.04s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1206 10:25:47.603154  364855 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1206 10:25:47.603233  364855 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-807014
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-807014: exit status 85 (91.972209ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-807014 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-807014 │ jenkins │ v1.37.0 │ 06 Dec 25 10:25 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:25:41
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:25:41.612346  364861 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:25:41.612554  364861 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:25:41.612582  364861 out.go:374] Setting ErrFile to fd 2...
	I1206 10:25:41.612605  364861 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:25:41.613501  364861 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	W1206 10:25:41.613731  364861 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22047-362985/.minikube/config/config.json: open /home/jenkins/minikube-integration/22047-362985/.minikube/config/config.json: no such file or directory
	I1206 10:25:41.614229  364861 out.go:368] Setting JSON to true
	I1206 10:25:41.615182  364861 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":7693,"bootTime":1765009049,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 10:25:41.615262  364861 start.go:143] virtualization:  
	I1206 10:25:41.620447  364861 out.go:99] [download-only-807014] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1206 10:25:41.620623  364861 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball: no such file or directory
	I1206 10:25:41.620757  364861 notify.go:221] Checking for updates...
	I1206 10:25:41.623951  364861 out.go:171] MINIKUBE_LOCATION=22047
	I1206 10:25:41.627166  364861 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:25:41.630308  364861 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 10:25:41.633287  364861 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-362985/.minikube
	I1206 10:25:41.636351  364861 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1206 10:25:41.642387  364861 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1206 10:25:41.642693  364861 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:25:41.674745  364861 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:25:41.674939  364861 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:25:41.732835  364861 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-06 10:25:41.722781974 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:25:41.732942  364861 docker.go:319] overlay module found
	I1206 10:25:41.735985  364861 out.go:99] Using the docker driver based on user configuration
	I1206 10:25:41.736028  364861 start.go:309] selected driver: docker
	I1206 10:25:41.736036  364861 start.go:927] validating driver "docker" against <nil>
	I1206 10:25:41.736162  364861 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:25:41.798276  364861 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-06 10:25:41.789035296 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:25:41.798437  364861 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 10:25:41.798746  364861 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1206 10:25:41.798910  364861 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1206 10:25:41.802114  364861 out.go:171] Using Docker driver with root privileges
	I1206 10:25:41.805278  364861 cni.go:84] Creating CNI manager for ""
	I1206 10:25:41.805359  364861 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1206 10:25:41.805374  364861 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1206 10:25:41.805473  364861 start.go:353] cluster config:
	{Name:download-only-807014 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-807014 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:25:41.808656  364861 out.go:99] Starting "download-only-807014" primary control-plane node in "download-only-807014" cluster
	I1206 10:25:41.808689  364861 cache.go:134] Beginning downloading kic base image for docker with crio
	I1206 10:25:41.811650  364861 out.go:99] Pulling base image v0.0.48-1764843390-22032 ...
	I1206 10:25:41.811737  364861 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1206 10:25:41.811823  364861 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local docker daemon
	I1206 10:25:41.828625  364861 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 to local cache
	I1206 10:25:41.828842  364861 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 in local cache directory
	I1206 10:25:41.828946  364861 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 to local cache
	I1206 10:25:41.864342  364861 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1206 10:25:41.864371  364861 cache.go:65] Caching tarball of preloaded images
	I1206 10:25:41.864552  364861 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1206 10:25:41.867947  364861 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1206 10:25:41.867985  364861 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1206 10:25:41.954363  364861 preload.go:295] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1206 10:25:41.954548  364861 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1206 10:25:45.870429  364861 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1206 10:25:45.870881  364861 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/download-only-807014/config.json ...
	I1206 10:25:45.870920  364861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/download-only-807014/config.json: {Name:mk709f346323b46356fc3d043a898fbea0c50802 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1206 10:25:45.871141  364861 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1206 10:25:45.871352  364861 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22047-362985/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-807014 host does not exist
	  To start a cluster, run: "minikube start -p download-only-807014"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-807014
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.34.2/json-events (4.34s)

=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-154072 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-154072 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.341201388s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (4.34s)

TestDownloadOnly/v1.34.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1206 10:25:52.404664  364855 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1206 10:25:52.404702  364855 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

TestDownloadOnly/v1.34.2/LogsDuration (0.16s)

=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-154072
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-154072: exit status 85 (157.676865ms)
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-807014 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-807014 │ jenkins │ v1.37.0 │ 06 Dec 25 10:25 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 06 Dec 25 10:25 UTC │ 06 Dec 25 10:25 UTC │
	│ delete  │ -p download-only-807014                                                                                                                                                   │ download-only-807014 │ jenkins │ v1.37.0 │ 06 Dec 25 10:25 UTC │ 06 Dec 25 10:25 UTC │
	│ start   │ -o=json --download-only -p download-only-154072 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-154072 │ jenkins │ v1.37.0 │ 06 Dec 25 10:25 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:25:48
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:25:48.107256  365056 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:25:48.107418  365056 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:25:48.107430  365056 out.go:374] Setting ErrFile to fd 2...
	I1206 10:25:48.107434  365056 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:25:48.107705  365056 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 10:25:48.108130  365056 out.go:368] Setting JSON to true
	I1206 10:25:48.108968  365056 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":7699,"bootTime":1765009049,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 10:25:48.109039  365056 start.go:143] virtualization:  
	I1206 10:25:48.112507  365056 out.go:99] [download-only-154072] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:25:48.112739  365056 notify.go:221] Checking for updates...
	I1206 10:25:48.115571  365056 out.go:171] MINIKUBE_LOCATION=22047
	I1206 10:25:48.118529  365056 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:25:48.121404  365056 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 10:25:48.124462  365056 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-362985/.minikube
	I1206 10:25:48.127440  365056 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1206 10:25:48.133246  365056 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1206 10:25:48.133539  365056 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:25:48.160931  365056 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:25:48.161051  365056 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:25:48.221335  365056 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-06 10:25:48.211824967 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:25:48.221441  365056 docker.go:319] overlay module found
	I1206 10:25:48.224481  365056 out.go:99] Using the docker driver based on user configuration
	I1206 10:25:48.224518  365056 start.go:309] selected driver: docker
	I1206 10:25:48.224526  365056 start.go:927] validating driver "docker" against <nil>
	I1206 10:25:48.224641  365056 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:25:48.279868  365056 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-12-06 10:25:48.27059074 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:25:48.280031  365056 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 10:25:48.280295  365056 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1206 10:25:48.280456  365056 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1206 10:25:48.283587  365056 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-154072 host does not exist
	  To start a cluster, run: "minikube start -p download-only-154072"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.16s)

TestDownloadOnly/v1.34.2/DeleteAll (0.33s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.33s)

TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-154072
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.22s)

TestDownloadOnly/v1.35.0-beta.0/json-events (4.67s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-171878 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-171878 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.670157926s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (4.67s)

TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1206 10:25:57.783704  364855 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
I1206 10:25:57.783744  364855 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22047-362985/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-171878
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-171878: exit status 85 (89.408273ms)
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                       ARGS                                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-807014 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-807014 │ jenkins │ v1.37.0 │ 06 Dec 25 10:25 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 06 Dec 25 10:25 UTC │ 06 Dec 25 10:25 UTC │
	│ delete  │ -p download-only-807014                                                                                                                                                          │ download-only-807014 │ jenkins │ v1.37.0 │ 06 Dec 25 10:25 UTC │ 06 Dec 25 10:25 UTC │
	│ start   │ -o=json --download-only -p download-only-154072 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-154072 │ jenkins │ v1.37.0 │ 06 Dec 25 10:25 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 06 Dec 25 10:25 UTC │ 06 Dec 25 10:25 UTC │
	│ delete  │ -p download-only-154072                                                                                                                                                          │ download-only-154072 │ jenkins │ v1.37.0 │ 06 Dec 25 10:25 UTC │ 06 Dec 25 10:25 UTC │
	│ start   │ -o=json --download-only -p download-only-171878 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-171878 │ jenkins │ v1.37.0 │ 06 Dec 25 10:25 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/06 10:25:53
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1206 10:25:53.158756  365253 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:25:53.158984  365253 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:25:53.159012  365253 out.go:374] Setting ErrFile to fd 2...
	I1206 10:25:53.159034  365253 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:25:53.159483  365253 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 10:25:53.160066  365253 out.go:368] Setting JSON to true
	I1206 10:25:53.161047  365253 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":7705,"bootTime":1765009049,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 10:25:53.161176  365253 start.go:143] virtualization:  
	I1206 10:25:53.206679  365253 out.go:99] [download-only-171878] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:25:53.206944  365253 notify.go:221] Checking for updates...
	I1206 10:25:53.237714  365253 out.go:171] MINIKUBE_LOCATION=22047
	I1206 10:25:53.274488  365253 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:25:53.318352  365253 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 10:25:53.350511  365253 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-362985/.minikube
	I1206 10:25:53.380818  365253 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1206 10:25:53.445196  365253 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1206 10:25:53.445544  365253 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:25:53.468945  365253 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:25:53.469074  365253 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:25:53.530277  365253 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-06 10:25:53.520425637 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:25:53.530387  365253 docker.go:319] overlay module found
	I1206 10:25:53.567138  365253 out.go:99] Using the docker driver based on user configuration
	I1206 10:25:53.567202  365253 start.go:309] selected driver: docker
	I1206 10:25:53.567210  365253 start.go:927] validating driver "docker" against <nil>
	I1206 10:25:53.567334  365253 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:25:53.631779  365253 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-06 10:25:53.62279696 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:25:53.631959  365253 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1206 10:25:53.632241  365253 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1206 10:25:53.632392  365253 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1206 10:25:53.662979  365253 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-171878 host does not exist
	  To start a cluster, run: "minikube start -p download-only-171878"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.09s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.23s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-171878
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.6s)

=== RUN   TestBinaryMirror
I1206 10:25:59.103103  364855 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-657674 --alsologtostderr --binary-mirror http://127.0.0.1:46333 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-657674" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-657674
--- PASS: TestBinaryMirror (0.60s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-545880
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-545880: exit status 85 (67.8118ms)
-- stdout --
	* Profile "addons-545880" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-545880"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-545880
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-545880: exit status 85 (78.406363ms)
-- stdout --
	* Profile "addons-545880" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-545880"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (161.16s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-545880 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-545880 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m41.158162199s)
--- PASS: TestAddons/Setup (161.16s)

TestAddons/serial/GCPAuth/Namespaces (0.2s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-545880 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-545880 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.20s)

TestAddons/serial/GCPAuth/FakeCredentials (10.88s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-545880 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-545880 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [7eb75450-8c10-4096-88ce-6b4ee4f0598f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [7eb75450-8c10-4096-88ce-6b4ee4f0598f] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003613987s
addons_test.go:694: (dbg) Run:  kubectl --context addons-545880 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-545880 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-545880 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-545880 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.88s)

TestAddons/StoppedEnableDisable (12.69s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-545880
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-545880: (12.389737469s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-545880
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-545880
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-545880
--- PASS: TestAddons/StoppedEnableDisable (12.69s)

TestCertOptions (41.24s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-031349 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-031349 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (38.192883568s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-031349 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-031349 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-031349 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-031349" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-031349
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-031349: (2.291742672s)
--- PASS: TestCertOptions (41.24s)

TestCertExpiration (245.16s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-959577 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-959577 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (40.193427841s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-959577 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-959577 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (21.826100755s)
helpers_test.go:175: Cleaning up "cert-expiration-959577" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-959577
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-959577: (3.13433163s)
--- PASS: TestCertExpiration (245.16s)

TestForceSystemdFlag (36.36s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-551542 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-551542 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (33.122055555s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-551542 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-551542" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-551542
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-551542: (2.847511451s)
--- PASS: TestForceSystemdFlag (36.36s)

TestForceSystemdEnv (43.34s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-304642 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-304642 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (40.140085384s)
helpers_test.go:175: Cleaning up "force-systemd-env-304642" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-304642
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-304642: (3.20340306s)
--- PASS: TestForceSystemdEnv (43.34s)

TestErrorSpam/setup (34.09s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-056894 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-056894 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-056894 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-056894 --driver=docker  --container-runtime=crio: (34.087774684s)
--- PASS: TestErrorSpam/setup (34.09s)

TestErrorSpam/start (0.83s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-056894 --log_dir /tmp/nospam-056894 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-056894 --log_dir /tmp/nospam-056894 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-056894 --log_dir /tmp/nospam-056894 start --dry-run
--- PASS: TestErrorSpam/start (0.83s)

TestErrorSpam/status (1.14s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-056894 --log_dir /tmp/nospam-056894 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-056894 --log_dir /tmp/nospam-056894 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-056894 --log_dir /tmp/nospam-056894 status
--- PASS: TestErrorSpam/status (1.14s)

TestErrorSpam/pause (7.16s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-056894 --log_dir /tmp/nospam-056894 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-056894 --log_dir /tmp/nospam-056894 pause: exit status 80 (2.268352017s)
-- stdout --
	* Pausing node nospam-056894 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:32:42Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-056894 --log_dir /tmp/nospam-056894 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-056894 --log_dir /tmp/nospam-056894 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-056894 --log_dir /tmp/nospam-056894 pause: exit status 80 (2.413941968s)
-- stdout --
	* Pausing node nospam-056894 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:32:44Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-056894 --log_dir /tmp/nospam-056894 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-056894 --log_dir /tmp/nospam-056894 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-056894 --log_dir /tmp/nospam-056894 pause: exit status 80 (2.477992691s)
-- stdout --
	* Pausing node nospam-056894 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:32:46Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-056894 --log_dir /tmp/nospam-056894 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (7.16s)

TestErrorSpam/unpause (5.52s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-056894 --log_dir /tmp/nospam-056894 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-056894 --log_dir /tmp/nospam-056894 unpause: exit status 80 (2.077074251s)
-- stdout --
	* Unpausing node nospam-056894 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:32:48Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-056894 --log_dir /tmp/nospam-056894 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-056894 --log_dir /tmp/nospam-056894 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-056894 --log_dir /tmp/nospam-056894 unpause: exit status 80 (1.799035558s)
-- stdout --
	* Unpausing node nospam-056894 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:32:50Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-056894 --log_dir /tmp/nospam-056894 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-056894 --log_dir /tmp/nospam-056894 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-056894 --log_dir /tmp/nospam-056894 unpause: exit status 80 (1.639697992s)
-- stdout --
	* Unpausing node nospam-056894 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-06T10:32:52Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-056894 --log_dir /tmp/nospam-056894 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.52s)
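
The GUEST_UNPAUSE failures above trace to minikube shelling out to `sudo runc list -f json` inside the node container: on a crio node where runc has never written any container state, /run/runc does not exist and runc exits 1. A minimal Go sketch of tolerating that case; the helper is hypothetical, not minikube's actual code, and the node name is taken from this log:

// runclist.go: treat a missing /run/runc state directory as "no containers".
package main

import (
    "fmt"
    "os/exec"
    "strings"
)

// listContainers runs runc inside the given node container via docker exec.
func listContainers(node string) ([]byte, error) {
    out, err := exec.Command("docker", "exec", node,
        "sudo", "runc", "list", "-f", "json").CombinedOutput()
    if err != nil && strings.Contains(string(out), "open /run/runc: no such file or directory") {
        // runc has never created state on this node, so nothing can be paused.
        return []byte("[]"), nil
    }
    return out, err
}

func main() {
    out, err := listContainers("nospam-056894")
    fmt.Println(string(out), err)
}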

TestErrorSpam/stop (1.54s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-056894 --log_dir /tmp/nospam-056894 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-056894 --log_dir /tmp/nospam-056894 stop: (1.320379342s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-056894 --log_dir /tmp/nospam-056894 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-056894 --log_dir /tmp/nospam-056894 stop
--- PASS: TestErrorSpam/stop (1.54s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/test/nested/copy/364855/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (80.41s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-205266 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1206 10:33:41.817964  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:33:41.824437  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:33:41.835920  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:33:41.857427  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:33:41.898919  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:33:41.980553  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:33:42.142473  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:33:42.463932  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:33:43.106066  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:33:44.387791  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:33:46.949802  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:33:52.072102  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 10:34:02.314182  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-205266 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m20.404572885s)
--- PASS: TestFunctional/serial/StartWithProxy (80.41s)
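
The repeated cert_rotation errors during this start are client-go's certificate-rotation loop retrying a kubeconfig user entry that still points at the deleted addons-545880 profile's client.crt; they are noise here, not a test failure. A sketch of detecting such stale entries with k8s.io/client-go/tools/clientcmd (an illustrative cleanup tool, not part of the test suite):

package main

import (
    "fmt"
    "os"

    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
    if err != nil {
        panic(err)
    }
    for name, auth := range cfg.AuthInfos {
        if p := auth.ClientCertificate; p != "" {
            if _, statErr := os.Stat(p); os.IsNotExist(statErr) {
                fmt.Printf("user %q references missing client cert %s\n", name, p)
            }
        }
    }
}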

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (28.74s)

=== RUN   TestFunctional/serial/SoftStart
I1206 10:34:19.319515  364855 config.go:182] Loaded profile config "functional-205266": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-205266 --alsologtostderr -v=8
E1206 10:34:22.795582  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-205266 --alsologtostderr -v=8: (28.731382605s)
functional_test.go:678: soft start took 28.735921363s for "functional-205266" cluster.
I1206 10:34:48.052057  364855 config.go:182] Loaded profile config "functional-205266": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (28.74s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-205266 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.57s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-205266 cache add registry.k8s.io/pause:3.1: (1.238064797s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-205266 cache add registry.k8s.io/pause:3.3: (1.224483081s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-205266 cache add registry.k8s.io/pause:latest: (1.102597311s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.57s)

TestFunctional/serial/CacheCmd/cache/add_local (1.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-205266 /tmp/TestFunctionalserialCacheCmdcacheadd_local3627518801/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 cache add minikube-local-cache-test:functional-205266
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 cache delete minikube-local-cache-test:functional-205266
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-205266
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.31s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.36s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.36s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.95s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-205266 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (305.488055ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.95s)
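
The reload cycle above is: `crictl rmi` the image, confirm `crictl inspecti` now exits 1 (the FATA "no such image" line), run `cache reload`, then confirm inspecti succeeds. A sketch of that presence probe, with the binary path and profile name as they appear in this log:

package main

import (
    "fmt"
    "os/exec"
)

// imageCached reports whether crictl can inspect the image inside the node;
// `crictl inspecti` exits 1 when the image is absent.
func imageCached(profile, image string) bool {
    return exec.Command("out/minikube-linux-arm64", "-p", profile,
        "ssh", "sudo crictl inspecti "+image).Run() == nil
}

func main() {
    fmt.Println(imageCached("functional-205266", "registry.k8s.io/pause:latest"))
}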

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 kubectl -- --context functional-205266 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-205266 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (39.53s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-205266 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1206 10:35:03.757027  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-205266 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.529251801s)
functional_test.go:776: restart took 39.529785813s for "functional-205266" cluster.
I1206 10:35:35.460223  364855 config.go:182] Loaded profile config "functional-205266": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (39.53s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-205266 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
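
ComponentHealth parses the control-plane pods as JSON and requires phase Running plus a Ready condition for each, which is what the paired phase/status lines above record. A trimmed sketch of the same decode; the struct is local to this example, not the test's own type:

package main

import (
    "encoding/json"
    "fmt"
    "os/exec"
)

type podList struct {
    Items []struct {
        Metadata struct{ Name string } `json:"metadata"`
        Status   struct {
            Phase      string `json:"phase"`
            Conditions []struct {
                Type   string `json:"type"`
                Status string `json:"status"`
            } `json:"conditions"`
        } `json:"status"`
    } `json:"items"`
}

func main() {
    out, err := exec.Command("kubectl", "--context", "functional-205266",
        "get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
    if err != nil {
        panic(err)
    }
    var pods podList
    if err := json.Unmarshal(out, &pods); err != nil {
        panic(err)
    }
    for _, p := range pods.Items {
        ready := "False"
        for _, c := range p.Status.Conditions {
            if c.Type == "Ready" {
                ready = c.Status
            }
        }
        fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
    }
}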

TestFunctional/serial/LogsCmd (1.52s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-205266 logs: (1.524831638s)
--- PASS: TestFunctional/serial/LogsCmd (1.52s)

TestFunctional/serial/LogsFileCmd (1.55s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 logs --file /tmp/TestFunctionalserialLogsFileCmd2162279700/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-205266 logs --file /tmp/TestFunctionalserialLogsFileCmd2162279700/001/logs.txt: (1.547147726s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.55s)

TestFunctional/serial/InvalidService (4.57s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-205266 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-205266
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-205266: exit status 115 (455.775099ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31515 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-205266 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.57s)
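
InvalidService exercises the SVC_UNREACHABLE path: the Service is allocated a NodePort (31515 above) but selects no running pod, so minikube exits 115 rather than presenting the URL as usable. A sketch that checks for ready endpoints before trusting a service URL; the jsonpath expression is an assumption for illustration, not what minikube itself runs:

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    // An empty address list means the Service selects no ready pod, which is
    // exactly the condition minikube reports above.
    out, err := exec.Command("kubectl", "--context", "functional-205266",
        "get", "endpoints", "invalid-svc",
        "-o", "jsonpath={.subsets[*].addresses[*].ip}").Output()
    if err != nil {
        panic(err)
    }
    if strings.TrimSpace(string(out)) == "" {
        fmt.Println("no running pod for service invalid-svc: URL is unreachable")
    }
}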

TestFunctional/parallel/ConfigCmd (0.52s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-205266 config get cpus: exit status 14 (80.732035ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-205266 config get cpus: exit status 14 (99.911885ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.52s)

TestFunctional/parallel/DashboardCmd (8.13s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-205266 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-205266 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 389948: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.13s)

TestFunctional/parallel/DryRun (0.59s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-205266 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-205266 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (214.657554ms)
-- stdout --
	* [functional-205266] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-362985/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-362985/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1206 10:36:14.806066  389680 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:36:14.806191  389680 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:36:14.806202  389680 out.go:374] Setting ErrFile to fd 2...
	I1206 10:36:14.806207  389680 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:36:14.806551  389680 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 10:36:14.807533  389680 out.go:368] Setting JSON to false
	I1206 10:36:14.808734  389680 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8326,"bootTime":1765009049,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 10:36:14.808859  389680 start.go:143] virtualization:  
	I1206 10:36:14.814054  389680 out.go:179] * [functional-205266] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 10:36:14.817088  389680 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 10:36:14.817161  389680 notify.go:221] Checking for updates...
	I1206 10:36:14.822934  389680 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:36:14.825828  389680 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 10:36:14.828679  389680 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-362985/.minikube
	I1206 10:36:14.831675  389680 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:36:14.834613  389680 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:36:14.837912  389680 config.go:182] Loaded profile config "functional-205266": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:36:14.838497  389680 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:36:14.882412  389680 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:36:14.882597  389680 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:36:14.948983  389680 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-06 10:36:14.938736493 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:36:14.949094  389680 docker.go:319] overlay module found
	I1206 10:36:14.952510  389680 out.go:179] * Using the docker driver based on existing profile
	I1206 10:36:14.955334  389680 start.go:309] selected driver: docker
	I1206 10:36:14.955356  389680 start.go:927] validating driver "docker" against &{Name:functional-205266 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-205266 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:36:14.955498  389680 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:36:14.958922  389680 out.go:203] 
	W1206 10:36:14.961833  389680 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1206 10:36:14.964842  389680 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-205266 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.59s)
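
The dry-run failure is pure argument validation: the requested 250MB is compared against minikube's 1800MB usable floor before any driver work happens. A sketch of that comparison using github.com/docker/go-units for parsing, an assumed stand-in for minikube's internal parser; note the log message itself mixes MiB and MB units:

package main

import (
    "fmt"

    units "github.com/docker/go-units"
)

const minUsableMB = 1800 // the floor reported in the log message

func main() {
    req, err := units.RAMInBytes("250MB") // go-units treats MB as 1024-based here
    if err != nil {
        panic(err)
    }
    if req/units.MiB < minUsableMB {
        fmt.Printf("X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation %dMiB is less than the usable minimum of %dMB\n",
            req/units.MiB, minUsableMB)
    }
}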

TestFunctional/parallel/InternationalLanguage (0.27s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-205266 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-205266 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (265.751741ms)
-- stdout --
	* [functional-205266] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-362985/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-362985/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1206 10:36:14.559216  389635 out.go:360] Setting OutFile to fd 1 ...
	I1206 10:36:14.559461  389635 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:36:14.559491  389635 out.go:374] Setting ErrFile to fd 2...
	I1206 10:36:14.559513  389635 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 10:36:14.560627  389635 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 10:36:14.561181  389635 out.go:368] Setting JSON to false
	I1206 10:36:14.562428  389635 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8326,"bootTime":1765009049,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 10:36:14.562536  389635 start.go:143] virtualization:  
	I1206 10:36:14.566972  389635 out.go:179] * [functional-205266] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1206 10:36:14.570632  389635 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 10:36:14.570789  389635 notify.go:221] Checking for updates...
	I1206 10:36:14.576966  389635 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 10:36:14.579794  389635 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 10:36:14.582539  389635 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-362985/.minikube
	I1206 10:36:14.585674  389635 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 10:36:14.588524  389635 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 10:36:14.591974  389635 config.go:182] Loaded profile config "functional-205266": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 10:36:14.592567  389635 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 10:36:14.632303  389635 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 10:36:14.632416  389635 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 10:36:14.732972  389635 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-06 10:36:14.723798615 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 10:36:14.733092  389635 docker.go:319] overlay module found
	I1206 10:36:14.736369  389635 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1206 10:36:14.739274  389635 start.go:309] selected driver: docker
	I1206 10:36:14.739297  389635 start.go:927] validating driver "docker" against &{Name:functional-205266 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-205266 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 10:36:14.739566  389635 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 10:36:14.743186  389635 out.go:203] 
	W1206 10:36:14.746121  389635 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1206 10:36:14.749017  389635 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.27s)

TestFunctional/parallel/StatusCmd (1.32s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.32s)
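
The -f flag renders minikube's status struct through Go's text/template, so the format string above (including its literal "kublet" label, which is verbatim from the test source) simply prints labeled fields. A minimal sketch with a stand-in struct:

package main

import (
    "os"
    "text/template"
)

// Status is a stand-in for minikube's status struct; only the fields the
// format string references are modeled here.
type Status struct {
    Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
    tmpl := template.Must(template.New("status").Parse(
        "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"))
    _ = tmpl.Execute(os.Stdout, Status{"Running", "Running", "Running", "Configured"})
}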

TestFunctional/parallel/ServiceCmdConnect (7.73s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-205266 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-205266 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-cp7xl" [4b5cb742-7f97-4204-a17e-41c6f9cd236b] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-cp7xl" [4b5cb742-7f97-4204-a17e-41c6f9cd236b] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003528681s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:30129
functional_test.go:1680: http://192.168.49.2:30129: success! body:
Request served by hello-node-connect-7d85dfc575-cp7xl

HTTP/1.1 GET /

Host: 192.168.49.2:30129
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.73s)
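
Once the deployment is healthy, the test fetches the NodePort URL and expects the echo-server to reflect the request back, which is the body shown above. A sketch of that probe with a small retry budget (the retry count and interval are assumptions):

package main

import (
    "fmt"
    "io"
    "net/http"
    "strings"
    "time"
)

func main() {
    url := "http://192.168.49.2:30129"
    for i := 0; i < 5; i++ {
        resp, err := http.Get(url)
        if err == nil {
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            if strings.Contains(string(body), "Request served by") {
                fmt.Printf("%s: success! body:\n%s", url, body)
                return
            }
        }
        time.Sleep(2 * time.Second)
    }
    fmt.Println("service never became reachable")
}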

TestFunctional/parallel/AddonsCmd (0.22s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.22s)

TestFunctional/parallel/PersistentVolumeClaim (24.7s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [b12f5332-0c35-49be-bd5c-4ef4a2dd81d2] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004155024s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-205266 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-205266 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-205266 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-205266 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [38f3c979-7f77-4b4f-b942-b7bfe7e3f553] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [38f3c979-7f77-4b4f-b942-b7bfe7e3f553] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.003691939s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-205266 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-205266 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-205266 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [1f140a23-1d28-40ad-a22d-389002590e5a] Pending
helpers_test.go:352: "sp-pod" [1f140a23-1d28-40ad-a22d-389002590e5a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [1f140a23-1d28-40ad-a22d-389002590e5a] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003817013s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-205266 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.70s)
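
The persistence proof is the interesting part of this test: touch a file on the PVC-backed mount, delete and recreate the pod, then list the mount and expect the file to survive. A condensed sketch of the sequence; the readiness wait between apply and the final exec is elided here, and the helper is local to this example:

package main

import (
    "fmt"
    "os/exec"
)

func kubectl(args ...string) string {
    out, err := exec.Command("kubectl",
        append([]string{"--context", "functional-205266"}, args...)...).CombinedOutput()
    if err != nil {
        panic(fmt.Sprintf("%v: %s", err, out))
    }
    return string(out)
}

func main() {
    kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
    kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
    kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
    // once the new pod is Running, the file must still be on the volume:
    fmt.Print(kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount"))
}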

TestFunctional/parallel/SSHCmd (0.74s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.74s)

TestFunctional/parallel/CpCmd (2.71s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 ssh -n functional-205266 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 cp functional-205266:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd7459445/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 ssh -n functional-205266 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 ssh -n functional-205266 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.71s)

TestFunctional/parallel/FileSync (0.38s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/364855/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 ssh "sudo cat /etc/test/nested/copy/364855/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.38s)

TestFunctional/parallel/CertSync (2.41s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/364855.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 ssh "sudo cat /etc/ssl/certs/364855.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/364855.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 ssh "sudo cat /usr/share/ca-certificates/364855.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3648552.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 ssh "sudo cat /etc/ssl/certs/3648552.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3648552.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 ssh "sudo cat /usr/share/ca-certificates/3648552.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.41s)

TestFunctional/parallel/NodeLabels (0.22s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-205266 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.22s)
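
The go-template here ranges over the first node's metadata.labels and prints only the keys. An equivalent invocation from Go; the log's extra single quotes around the template are dropped in this sketch:

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    out, err := exec.Command("kubectl", "--context", "functional-205266",
        "get", "nodes", "--output=go-template",
        "--template={{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}").Output()
    if err != nil {
        panic(err)
    }
    keys := strings.Fields(string(out))
    fmt.Println(len(keys), "labels:", keys)
}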

TestFunctional/parallel/NonActiveRuntimeDisabled (0.67s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-205266 ssh "sudo systemctl is-active docker": exit status 1 (333.665394ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-205266 ssh "sudo systemctl is-active containerd": exit status 1 (332.622329ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.67s)
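
Both non-zero exits above are the expected outcome: systemctl is-active prints the unit state and exits non-zero (status 3 here) when the unit is inactive, and the ssh wrapper propagates that status. A small Go sketch of the same probe, assuming a local systemd host:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("systemctl", "is-active", "docker").CombinedOutput()
	fmt.Printf("state: %s", out) // "inactive\n" when docker is not running
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("exit code:", exitErr.ExitCode()) // 3 == inactive/dead
	}
}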

TestFunctional/parallel/License (0.36s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.36s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.89s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-205266 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-205266 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-205266 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 387420: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-205266 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.89s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-205266 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.5s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-205266 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [543722ff-44d6-4c63-b1a8-c2c323da794a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [543722ff-44d6-4c63-b1a8-c2c323da794a] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003739488s
I1206 10:35:54.863084  364855 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.50s)
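
The 9s figure above comes from a poll loop: the helper re-lists pods with the run=nginx-svc label until one reports Running or the 4m0s budget lapses. A rough stand-alone equivalent (a sketch, not the test's own code), shelling out to kubectl with the context name from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute) // same budget as the test
	for time.Now().Before(deadline) {
		out, _ := exec.Command("kubectl", "--context", "functional-205266",
			"get", "pods", "-l", "run=nginx-svc",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if strings.TrimSpace(string(out)) == "Running" {
			fmt.Println("run=nginx-svc healthy")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for run=nginx-svc")
}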

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-205266 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.244.119 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
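
AccessDirect passes once the LoadBalancer ingress IP published by the tunnel answers plain HTTP from the host. A minimal probe against the address reported above; it is only meaningful while minikube tunnel is running:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://10.103.244.119") // ingress IP from the log
	if err != nil {
		fmt.Println("tunnel not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s (%d bytes)\n", resp.Status, len(body))
}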

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-205266 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-205266 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-205266 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-vj9kp" [425909cb-0229-4f19-8e18-fb3975d25b3a] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-vj9kp" [425909cb-0229-4f19-8e18-fb3975d25b3a] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004534482s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.24s)

TestFunctional/parallel/ServiceCmd/List (0.59s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.59s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.61s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.61s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.64s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 service list -o json
functional_test.go:1504: Took "638.12526ms" to run "out/minikube-linux-arm64 -p functional-205266 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.64s)

TestFunctional/parallel/ProfileCmd/profile_list (0.55s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "463.065613ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "86.786389ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.55s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:31829
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.59s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "516.097145ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "70.492729ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.59s)
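
The Took "..." lines in the ProfileCmd tests are plain wall-clock timings wrapped around a single CLI invocation. A sketch of that pattern (binary path as in the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "profile", "list", "-o", "json")
	start := time.Now()
	if _, err := cmd.Output(); err != nil {
		fmt.Println("command failed:", err)
		return
	}
	fmt.Printf("Took %q to run %q\n", time.Since(start).String(), cmd.String())
}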

TestFunctional/parallel/ServiceCmd/Format (0.56s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.56s)

TestFunctional/parallel/MountCmd/any-port (10.86s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-205266 /tmp/TestFunctionalparallelMountCmdany-port2162271343/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765017372360331415" to /tmp/TestFunctionalparallelMountCmdany-port2162271343/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765017372360331415" to /tmp/TestFunctionalparallelMountCmdany-port2162271343/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765017372360331415" to /tmp/TestFunctionalparallelMountCmdany-port2162271343/001/test-1765017372360331415
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-205266 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (566.34191ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1206 10:36:12.927680  364855 retry.go:31] will retry after 294.319986ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  6 10:36 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  6 10:36 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  6 10:36 test-1765017372360331415
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 ssh cat /mount-9p/test-1765017372360331415
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-205266 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [9fd9a981-7829-42b6-8342-8e0fded79d2d] Pending
helpers_test.go:352: "busybox-mount" [9fd9a981-7829-42b6-8342-8e0fded79d2d] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [9fd9a981-7829-42b6-8342-8e0fded79d2d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [9fd9a981-7829-42b6-8342-8e0fded79d2d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.006751534s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-205266 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-205266 /tmp/TestFunctionalparallelMountCmdany-port2162271343/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.86s)
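
The first findmnt probe races the mount daemon, so an initial miss is normal; retry.go re-runs the probe after a short backoff (294ms here). The same retry shape in miniature, with the mount point from the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	backoff := 300 * time.Millisecond
	for attempt := 1; attempt <= 5; attempt++ {
		out, err := exec.Command("findmnt", "-T", "/mount-9p").Output()
		if err == nil {
			fmt.Print(string(out)) // mount is up
			return
		}
		fmt.Printf("attempt %d: not mounted yet, retrying in %s\n", attempt, backoff)
		time.Sleep(backoff)
		backoff *= 2
	}
	fmt.Println("gave up waiting for /mount-9p")
}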

TestFunctional/parallel/ServiceCmd/URL (0.58s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31829
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.58s)

TestFunctional/parallel/MountCmd/specific-port (2.64s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-205266 /tmp/TestFunctionalparallelMountCmdspecific-port2446692903/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 ssh "findmnt -T /mount-9p | grep 9p"
2025/12/06 10:36:23 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-205266 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (592.613885ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1206 10:36:23.808896  364855 retry.go:31] will retry after 728.202458ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-205266 /tmp/TestFunctionalparallelMountCmdspecific-port2446692903/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-205266 ssh "sudo umount -f /mount-9p": exit status 1 (423.089162ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-205266 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-205266 /tmp/TestFunctionalparallelMountCmdspecific-port2446692903/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.64s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1.4s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-205266 version -o=json --components: (1.401339519s)
--- PASS: TestFunctional/parallel/Version/components (1.40s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-205266 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-205266
localhost/kicbase/echo-server:functional-205266
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-205266 image ls --format short --alsologtostderr:
I1206 10:36:32.728789  392572 out.go:360] Setting OutFile to fd 1 ...
I1206 10:36:32.729076  392572 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:36:32.729108  392572 out.go:374] Setting ErrFile to fd 2...
I1206 10:36:32.729153  392572 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:36:32.729505  392572 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
I1206 10:36:32.730358  392572 config.go:182] Loaded profile config "functional-205266": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 10:36:32.730554  392572 config.go:182] Loaded profile config "functional-205266": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 10:36:32.731207  392572 cli_runner.go:164] Run: docker container inspect functional-205266 --format={{.State.Status}}
I1206 10:36:32.754583  392572 ssh_runner.go:195] Run: systemctl --version
I1206 10:36:32.754660  392572 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-205266
I1206 10:36:32.775858  392572 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-205266/id_rsa Username:docker}
I1206 10:36:32.898462  392572 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)
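
As each stderr trace shows, every image ls variant bottoms out in sudo crictl images --output json on the node; the short/table/json/yaml formatting is applied client-side. A sketch that decodes the relevant fields of that JSON (field names follow the CRI ListImages wire format; note sizes arrive as strings):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Shape of `crictl images --output json`; sizes are serialized as strings.
type imageList struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
		Size     string   `json:"size"`
	} `json:"images"`
}

func main() {
	var list imageList
	if err := json.NewDecoder(os.Stdin).Decode(&list); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag) // e.g. registry.k8s.io/pause:3.10.1
		}
	}
}

Piping it the output of minikube ssh -- sudo crictl images --output json reproduces the short listing above.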

TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-205266 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ docker.io/library/nginx                 │ latest             │ bb747ca923a5e │ 176MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ docker.io/kicbase/echo-server           │ latest             │ ce2d2cda2d858 │ 4.79MB │
│ localhost/kicbase/echo-server           │ functional-205266  │ ce2d2cda2d858 │ 4.79MB │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 94bff1bec29fd │ 75.9MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 4f982e73e768a │ 51.6MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ localhost/minikube-local-cache-test     │ functional-205266  │ 6044d1a5e76fc │ 3.33kB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ 2c5f0dedd21c2 │ 60.9MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 1b34917560f09 │ 72.6MB │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ docker.io/library/nginx                 │ alpine             │ cbad6347cca28 │ 54.8MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ b178af3d91f80 │ 84.8MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-205266 image ls --format table --alsologtostderr:
I1206 10:36:33.531210  392792 out.go:360] Setting OutFile to fd 1 ...
I1206 10:36:33.531414  392792 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:36:33.531441  392792 out.go:374] Setting ErrFile to fd 2...
I1206 10:36:33.531470  392792 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:36:33.531777  392792 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
I1206 10:36:33.532474  392792 config.go:182] Loaded profile config "functional-205266": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 10:36:33.532697  392792 config.go:182] Loaded profile config "functional-205266": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 10:36:33.533319  392792 cli_runner.go:164] Run: docker container inspect functional-205266 --format={{.State.Status}}
I1206 10:36:33.558147  392792 ssh_runner.go:195] Run: systemctl --version
I1206 10:36:33.558212  392792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-205266
I1206 10:36:33.586981  392792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-205266/id_rsa Username:docker}
I1206 10:36:33.699043  392792 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-205266 image ls --format json --alsologtostderr:
[{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b","docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b","localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed37
0ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-205266"],"size":"4789170"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8e
e3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"60857170"},{"id":"b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9a94f333d6fe202d804910534ef052b2cfa650982cdcbe48e92339c8d314dd84","registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"84753391"},{"id":"bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712"],"repoTags":["docker.io/library/nginx:latest"],"size":"175943180"},{"id":"94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786","repoDigests":["registry.k8s.io/kube-prox
y@sha256:20a31b16a001e3e4db71a17ba8effc4b145a3afa2086e844ab40dc5baa5b8d12","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"75941783"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54837949"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5
a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db354394
9","repoDigests":["registry.k8s.io/kube-scheduler@sha256:3eff58b308cdc6c65cf030333090e14cc77bea4ed4ea9a92d212a0babc924ffe","registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"51592021"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"6044d1a5e76fc3db9ee5a0298b4e62a5c2d3a619a9ff917ee277cefa0e49b4a0","repoDigests":["localhost/minikube-local-cache-test@sha256
:5b377d95c8e53c387b469c0bb26f0a0db0d7b0a8046aed8bc6ba7cba62805742"],"repoTags":["localhost/minikube-local-cache-test:functional-205266"],"size":"3328"},{"id":"1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4b3abd4d4543ac8451f97e9771aa0a29a9958e51ac02fe44900b4a224031df89","registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"72629077"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-205266 image ls --format json --alsologtostderr:
I1206 10:36:33.222997  392705 out.go:360] Setting OutFile to fd 1 ...
I1206 10:36:33.223177  392705 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:36:33.223188  392705 out.go:374] Setting ErrFile to fd 2...
I1206 10:36:33.223194  392705 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:36:33.223466  392705 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
I1206 10:36:33.224099  392705 config.go:182] Loaded profile config "functional-205266": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 10:36:33.224227  392705 config.go:182] Loaded profile config "functional-205266": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 10:36:33.224755  392705 cli_runner.go:164] Run: docker container inspect functional-205266 --format={{.State.Status}}
I1206 10:36:33.257473  392705 ssh_runner.go:195] Run: systemctl --version
I1206 10:36:33.257525  392705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-205266
I1206 10:36:33.286111  392705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-205266/id_rsa Username:docker}
I1206 10:36:33.402798  392705 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-205266 image ls --format yaml --alsologtostderr:
- id: cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54837949"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "60857170"
- id: 1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4b3abd4d4543ac8451f97e9771aa0a29a9958e51ac02fe44900b4a224031df89
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "72629077"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b
- docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-205266
size: "4789170"
- id: bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:7de350c1fbb1f7b119a1d08f69fef5c92624cb01e03bc25c0ae11072b8969712
repoTags:
- docker.io/library/nginx:latest
size: "175943180"
- id: 6044d1a5e76fc3db9ee5a0298b4e62a5c2d3a619a9ff917ee277cefa0e49b4a0
repoDigests:
- localhost/minikube-local-cache-test@sha256:5b377d95c8e53c387b469c0bb26f0a0db0d7b0a8046aed8bc6ba7cba62805742
repoTags:
- localhost/minikube-local-cache-test:functional-205266
size: "3328"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9a94f333d6fe202d804910534ef052b2cfa650982cdcbe48e92339c8d314dd84
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "84753391"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786
repoDigests:
- registry.k8s.io/kube-proxy@sha256:20a31b16a001e3e4db71a17ba8effc4b145a3afa2086e844ab40dc5baa5b8d12
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "75941783"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: 4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:3eff58b308cdc6c65cf030333090e14cc77bea4ed4ea9a92d212a0babc924ffe
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "51592021"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-205266 image ls --format yaml --alsologtostderr:
I1206 10:36:32.882184  392621 out.go:360] Setting OutFile to fd 1 ...
I1206 10:36:32.882324  392621 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:36:32.882337  392621 out.go:374] Setting ErrFile to fd 2...
I1206 10:36:32.882344  392621 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:36:32.882775  392621 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
I1206 10:36:32.883868  392621 config.go:182] Loaded profile config "functional-205266": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 10:36:32.884085  392621 config.go:182] Loaded profile config "functional-205266": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 10:36:32.885246  392621 cli_runner.go:164] Run: docker container inspect functional-205266 --format={{.State.Status}}
I1206 10:36:32.917266  392621 ssh_runner.go:195] Run: systemctl --version
I1206 10:36:32.917319  392621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-205266
I1206 10:36:32.959096  392621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-205266/id_rsa Username:docker}
I1206 10:36:33.070723  392621 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-205266 ssh pgrep buildkitd: exit status 1 (390.64594ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 image build -t localhost/my-image:functional-205266 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-205266 image build -t localhost/my-image:functional-205266 testdata/build --alsologtostderr: (3.529843196s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-205266 image build -t localhost/my-image:functional-205266 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 84f12337bb2
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-205266
--> 75d61f40f8c
Successfully tagged localhost/my-image:functional-205266
75d61f40f8c3560349a91bb535eb4d11f568ce3a1ec1cea1a8f251951d5006ec
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-205266 image build -t localhost/my-image:functional-205266 testdata/build --alsologtostderr:
I1206 10:36:33.409002  392760 out.go:360] Setting OutFile to fd 1 ...
I1206 10:36:33.409784  392760 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:36:33.409826  392760 out.go:374] Setting ErrFile to fd 2...
I1206 10:36:33.409850  392760 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 10:36:33.410174  392760 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
I1206 10:36:33.410960  392760 config.go:182] Loaded profile config "functional-205266": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 10:36:33.411741  392760 config.go:182] Loaded profile config "functional-205266": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1206 10:36:33.412345  392760 cli_runner.go:164] Run: docker container inspect functional-205266 --format={{.State.Status}}
I1206 10:36:33.434083  392760 ssh_runner.go:195] Run: systemctl --version
I1206 10:36:33.434146  392760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-205266
I1206 10:36:33.461047  392760 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-205266/id_rsa Username:docker}
I1206 10:36:33.571647  392760 build_images.go:162] Building image from path: /tmp/build.1786745579.tar
I1206 10:36:33.571725  392760 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1206 10:36:33.583234  392760 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1786745579.tar
I1206 10:36:33.591754  392760 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1786745579.tar: stat -c "%s %y" /var/lib/minikube/build/build.1786745579.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1786745579.tar': No such file or directory
I1206 10:36:33.591791  392760 ssh_runner.go:362] scp /tmp/build.1786745579.tar --> /var/lib/minikube/build/build.1786745579.tar (3072 bytes)
I1206 10:36:33.615099  392760 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1786745579
I1206 10:36:33.628599  392760 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1786745579 -xf /var/lib/minikube/build/build.1786745579.tar
I1206 10:36:33.638723  392760 crio.go:315] Building image: /var/lib/minikube/build/build.1786745579
I1206 10:36:33.638796  392760 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-205266 /var/lib/minikube/build/build.1786745579 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1206 10:36:36.843316  392760 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-205266 /var/lib/minikube/build/build.1786745579 --cgroup-manager=cgroupfs: (3.204497128s)
I1206 10:36:36.843418  392760 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1786745579
I1206 10:36:36.851454  392760 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1786745579.tar
I1206 10:36:36.859987  392760 build_images.go:218] Built localhost/my-image:functional-205266 from /tmp/build.1786745579.tar
I1206 10:36:36.860020  392760 build_images.go:134] succeeded building to: functional-205266
I1206 10:36:36.860026  392760 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.16s)
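
Per the trace, build_images.go packs the local testdata/build context into a tar (build.1786745579.tar above), copies it to /var/lib/minikube/build on the node, and only then runs sudo podman build against the unpacked directory. A self-contained sketch of the packing step (not minikube's own code):

package main

import (
	"archive/tar"
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// tarContext packs every regular file under dir into a tar at out,
// mirroring the first step of the image build traced above.
func tarContext(dir, out string) error {
	f, err := os.Create(out)
	if err != nil {
		return err
	}
	defer f.Close()
	tw := tar.NewWriter(f)
	defer tw.Close()
	return filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err
		}
		hdr, err := tar.FileInfoHeader(info, "")
		if err != nil {
			return err
		}
		rel, _ := filepath.Rel(dir, path)
		hdr.Name = filepath.ToSlash(rel)
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		src, err := os.Open(path)
		if err != nil {
			return err
		}
		defer src.Close()
		_, err = io.Copy(tw, src)
		return err
	})
}

func main() {
	if err := tarContext("testdata/build", "/tmp/build-context.tar"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("wrote /tmp/build-context.tar")
}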

TestFunctional/parallel/ImageCommands/Setup (0.67s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-205266
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.67s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.7s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 image load --daemon kicbase/echo-server:functional-205266 --alsologtostderr
E1206 10:36:25.679043  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-205266 image load --daemon kicbase/echo-server:functional-205266 --alsologtostderr: (1.429038229s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.70s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.7s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-205266 /tmp/TestFunctionalparallelMountCmdVerifyCleanup103987136/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-205266 /tmp/TestFunctionalparallelMountCmdVerifyCleanup103987136/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-205266 /tmp/TestFunctionalparallelMountCmdVerifyCleanup103987136/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-205266 ssh "findmnt -T" /mount1: exit status 1 (882.82024ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1206 10:36:26.748223  364855 retry.go:31] will retry after 595.829474ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-205266 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-205266 /tmp/TestFunctionalparallelMountCmdVerifyCleanup103987136/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-205266 /tmp/TestFunctionalparallelMountCmdVerifyCleanup103987136/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-205266 /tmp/TestFunctionalparallelMountCmdVerifyCleanup103987136/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.70s)
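
The cleanup being verified here is the --kill flag tearing down every background mount at once. A rough manual equivalent (host path and profile name are placeholders):

  # Start three mounts of the same host directory in the background.
  minikube mount -p my-profile /tmp/data:/mount1 &
  minikube mount -p my-profile /tmp/data:/mount2 &
  minikube mount -p my-profile /tmp/data:/mount3 &

  # Check a mount point from inside the node; findmnt exits non-zero
  # until the mount is ready, so a retry (as the test does) may be needed.
  minikube -p my-profile ssh "findmnt -T /mount1"

  # Kill all mount processes for the profile in one step.
  minikube mount -p my-profile --kill=true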

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 image load --daemon kicbase/echo-server:functional-205266 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.07s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-205266
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 image load --daemon kicbase/echo-server:functional-205266 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.35s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)
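
All three UpdateContextCmd subtests run the same command against different kubeconfig states (unchanged, missing cluster, empty config). The invocation itself is just (profile name is a placeholder):

  # Rewrite the kubeconfig entry for the profile to match the running
  # cluster; a no-op when nothing has drifted.
  minikube -p my-profile update-context --alsologtostderr -v=2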

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 image save kicbase/echo-server:functional-205266 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.77s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 image rm kicbase/echo-server:functional-205266 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.77s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.84s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-205266
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-205266 image save --daemon kicbase/echo-server:functional-205266 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-205266
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.50s)
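
Taken together, the four subtests above round-trip an image through a tarball and through the host daemon. A condensed sketch (profile, tag, and file path are placeholders):

  # Save an image from the cluster to a tarball, delete it, and load it back.
  minikube -p my-profile image save kicbase/echo-server:my-profile /tmp/echo-server.tar
  minikube -p my-profile image rm kicbase/echo-server:my-profile
  minikube -p my-profile image load /tmp/echo-server.tar

  # Alternatively, push the image back into the host Docker daemon; note
  # that the CRI-O image surfaces under the localhost/ prefix.
  minikube -p my-profile image save --daemon kicbase/echo-server:my-profile
  docker image inspect localhost/kicbase/echo-server:my-profile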

TestFunctional/delete_echo-server_images (0.05s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-205266
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-205266
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-205266
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22047-362985/.minikube/files/etc/test/nested/copy/364855/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.59s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-196950 cache add registry.k8s.io/pause:3.1: (1.23916055s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-196950 cache add registry.k8s.io/pause:3.3: (1.22574195s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-196950 cache add registry.k8s.io/pause:latest: (1.128863594s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.59s)
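
A sketch of the cache workflow these CacheCmd subtests walk through (profile name is a placeholder):

  # Download images into the local cache and push them into the cluster.
  minikube -p my-profile cache add registry.k8s.io/pause:3.1
  minikube -p my-profile cache add registry.k8s.io/pause:latest

  # The list and delete subcommands act on the local cache itself, so the
  # tests invoke them without -p.
  minikube cache list
  minikube cache delete registry.k8s.io/pause:3.1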

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-196950 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach3492973301/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 cache add minikube-local-cache-test:functional-196950
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 cache delete minikube-local-cache-test:functional-196950
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-196950
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.96s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-196950 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (302.85689ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.96s)
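
What cache_reload asserts, done by hand (profile name is a placeholder):

  # Delete the cached image from inside the node and confirm it is gone;
  # crictl inspecti exits 1 for a missing image.
  minikube -p my-profile ssh sudo crictl rmi registry.k8s.io/pause:latest
  minikube -p my-profile ssh sudo crictl inspecti registry.k8s.io/pause:latest  # fails

  # Re-push everything in the local cache; the inspect then succeeds.
  minikube -p my-profile cache reload
  minikube -p my-profile ssh sudo crictl inspecti registry.k8s.io/pause:latest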

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.98s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 logs
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.98s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs274505424/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-196950 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs274505424/001/logs.txt: (1.137344532s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.14s)
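
The two logs subtests differ only in destination; a sketch (profile name and path are placeholders):

  # Dump cluster logs to stdout, or write them to a file with --file.
  minikube -p my-profile logs
  minikube -p my-profile logs --file /tmp/logs.txt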

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.49s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-196950 config get cpus: exit status 14 (90.950699ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-196950 config get cpus: exit status 14 (64.250555ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.49s)
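
The exit codes are the point here: config get returns 14 when the key is unset, which is what the test keeps asserting. A sketch (profile name is a placeholder):

  minikube -p my-profile config unset cpus
  minikube -p my-profile config get cpus    # exit 14: key not found in config
  minikube -p my-profile config set cpus 2
  minikube -p my-profile config get cpus    # prints 2, exit 0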

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.46s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-196950 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-196950 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (189.104972ms)

-- stdout --
	* [functional-196950] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-362985/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-362985/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1206 11:05:41.490180  423853 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:05:41.490428  423853 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:05:41.490460  423853 out.go:374] Setting ErrFile to fd 2...
	I1206 11:05:41.490482  423853 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:05:41.490770  423853 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 11:05:41.491213  423853 out.go:368] Setting JSON to false
	I1206 11:05:41.492218  423853 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10093,"bootTime":1765009049,"procs":164,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 11:05:41.492327  423853 start.go:143] virtualization:  
	I1206 11:05:41.497980  423853 out.go:179] * [functional-196950] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1206 11:05:41.501094  423853 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 11:05:41.501200  423853 notify.go:221] Checking for updates...
	I1206 11:05:41.504628  423853 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 11:05:41.507573  423853 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 11:05:41.510528  423853 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-362985/.minikube
	I1206 11:05:41.513473  423853 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 11:05:41.516598  423853 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 11:05:41.520081  423853 config.go:182] Loaded profile config "functional-196950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 11:05:41.520684  423853 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 11:05:41.549110  423853 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 11:05:41.549230  423853 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:05:41.607312  423853 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:05:41.596618465 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:05:41.607458  423853 docker.go:319] overlay module found
	I1206 11:05:41.610611  423853 out.go:179] * Using the docker driver based on existing profile
	I1206 11:05:41.613496  423853 start.go:309] selected driver: docker
	I1206 11:05:41.613523  423853 start.go:927] validating driver "docker" against &{Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:05:41.613627  423853 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 11:05:41.617310  423853 out.go:203] 
	W1206 11:05:41.620261  423853 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1206 11:05:41.623255  423853 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-196950 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.46s)
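
A dry run validates flags and configuration without touching the cluster; the undersized memory request above fails fast with exit code 23. A sketch (profile name is a placeholder):

  # Fails with RSRC_INSUFFICIENT_REQ_MEMORY (exit 23): 250MB is below the
  # 1800MB usable minimum.
  minikube start -p my-profile --dry-run --memory 250MB --driver=docker --container-runtime=crio
  echo $?   # 23

  # A well-formed dry run against the same profile exits 0.
  minikube start -p my-profile --dry-run --driver=docker --container-runtime=crio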

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-196950 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-196950 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (245.256562ms)

-- stdout --
	* [functional-196950] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-362985/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-362985/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1206 11:05:41.957197  423975 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:05:41.957372  423975 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:05:41.957399  423975 out.go:374] Setting ErrFile to fd 2...
	I1206 11:05:41.957418  423975 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:05:41.957813  423975 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 11:05:41.958250  423975 out.go:368] Setting JSON to false
	I1206 11:05:41.959202  423975 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10093,"bootTime":1765009049,"procs":164,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1206 11:05:41.959273  423975 start.go:143] virtualization:  
	I1206 11:05:41.962659  423975 out.go:179] * [functional-196950] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1206 11:05:41.965760  423975 out.go:179]   - MINIKUBE_LOCATION=22047
	I1206 11:05:41.965811  423975 notify.go:221] Checking for updates...
	I1206 11:05:41.971719  423975 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1206 11:05:41.974704  423975 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22047-362985/kubeconfig
	I1206 11:05:41.977639  423975 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-362985/.minikube
	I1206 11:05:41.980611  423975 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1206 11:05:41.983549  423975 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1206 11:05:41.986961  423975 config.go:182] Loaded profile config "functional-196950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1206 11:05:41.987685  423975 driver.go:422] Setting default libvirt URI to qemu:///system
	I1206 11:05:42.035258  423975 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1206 11:05:42.035460  423975 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:05:42.103002  423975 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-06 11:05:42.09106596 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:05:42.103130  423975 docker.go:319] overlay module found
	I1206 11:05:42.106615  423975 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1206 11:05:42.118016  423975 start.go:309] selected driver: docker
	I1206 11:05:42.118069  423975 start.go:927] validating driver "docker" against &{Name:functional-196950 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1764843390-22032@sha256:e0549ab5b944401a6b1b03cfbd02cd8e1f1ac2f1cf44298eab0c6846e4375164 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-196950 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1206 11:05:42.118200  423975 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1206 11:05:42.128616  423975 out.go:203] 
	W1206 11:05:42.139461  423975 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1206 11:05:42.143337  423975 out.go:203] 

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.25s)
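
The French output above is driven by the client's locale; the test presumably sets it through the environment. A hedged sketch, assuming minikube honors the standard locale variables (profile name is a placeholder):

  # Same undersized dry run under a French locale: the message is translated
  # but the RSRC_INSUFFICIENT_REQ_MEMORY reason and exit code 23 are unchanged.
  LC_ALL=fr_FR.UTF-8 minikube start -p my-profile --dry-run --memory 250MB --driver=docker --container-runtime=crio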

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.68s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.68s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.17s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 ssh -n functional-196950 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 cp functional-196950:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp3781777450/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 ssh -n functional-196950 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 ssh -n functional-196950 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.17s)
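
cp works in both directions and creates missing target directories on the node; a sketch (paths and profile name are placeholders):

  # Host-to-node, node-to-host, and a target directory created on demand.
  minikube -p my-profile cp ./cp-test.txt /home/docker/cp-test.txt
  minikube -p my-profile cp my-profile:/home/docker/cp-test.txt /tmp/cp-test.txt
  minikube -p my-profile cp ./cp-test.txt /tmp/does/not/exist/cp-test.txt

  # Verify on the node (-n selects the node in multi-node profiles).
  minikube -p my-profile ssh -n my-profile "sudo cat /home/docker/cp-test.txt"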

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/364855/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 ssh "sudo cat /etc/test/nested/copy/364855/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.37s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (2.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/364855.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 ssh "sudo cat /etc/ssl/certs/364855.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/364855.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 ssh "sudo cat /usr/share/ca-certificates/364855.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3648552.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 ssh "sudo cat /etc/ssl/certs/3648552.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3648552.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 ssh "sudo cat /usr/share/ca-certificates/3648552.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (2.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.77s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-196950 ssh "sudo systemctl is-active docker": exit status 1 (371.708067ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-196950 ssh "sudo systemctl is-active containerd": exit status 1 (398.743385ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.77s)
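
With CRI-O selected, the sibling runtimes must be stopped inside the node; systemctl is-active prints "inactive" and exits 3 for a stopped unit, which is the non-zero exit the test expects. A sketch (profile name is a placeholder):

  minikube -p my-profile ssh "sudo systemctl is-active docker"      # inactive, exit 3
  minikube -p my-profile ssh "sudo systemctl is-active containerd"  # inactive, exit 3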

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.36s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.36s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.51s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.51s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.43s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 image ls --format short --alsologtostderr
E1206 11:05:45.365104  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-205266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-196950 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
localhost/minikube-local-cache-test:functional-196950
localhost/kicbase/echo-server:functional-196950
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-196950 image ls --format short --alsologtostderr:
I1206 11:05:45.140958  424616 out.go:360] Setting OutFile to fd 1 ...
I1206 11:05:45.141112  424616 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 11:05:45.141125  424616 out.go:374] Setting ErrFile to fd 2...
I1206 11:05:45.141130  424616 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 11:05:45.141679  424616 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
I1206 11:05:45.142512  424616 config.go:182] Loaded profile config "functional-196950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 11:05:45.142693  424616 config.go:182] Loaded profile config "functional-196950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 11:05:45.143270  424616 cli_runner.go:164] Run: docker container inspect functional-196950 --format={{.State.Status}}
I1206 11:05:45.184073  424616 ssh_runner.go:195] Run: systemctl --version
I1206 11:05:45.184580  424616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
I1206 11:05:45.229730  424616 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
I1206 11:05:45.368609  424616 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.43s)
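
image ls renders the same inventory in several formats: short (above) prints one ref per line, while the table and json variants exercised below reshape it. A sketch (profile name is a placeholder):

  minikube -p my-profile image ls --format short
  minikube -p my-profile image ls --format table
  minikube -p my-profile image ls --format json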

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-196950 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ localhost/kicbase/echo-server           │ functional-196950  │ ce2d2cda2d858 │ 4.79MB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0     │ 68b5f775f1876 │ 72.2MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ localhost/my-image                      │ functional-196950  │ 307522c4cda46 │ 1.64MB │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ e08f4d9d2e6ed │ 74.5MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0     │ 16378741539f1 │ 49.8MB │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ 71a676dd070f4 │ 1.63MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ 2c5f0dedd21c2 │ 60.9MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0     │ ccd634d9bcc36 │ 85MB   │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0     │ 404c2e1286177 │ 74.1MB │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ localhost/minikube-local-cache-test     │ functional-196950  │ 6044d1a5e76fc │ 3.33kB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-196950 image ls --format table --alsologtostderr:
I1206 11:05:49.682617  425114 out.go:360] Setting OutFile to fd 1 ...
I1206 11:05:49.682735  425114 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 11:05:49.682744  425114 out.go:374] Setting ErrFile to fd 2...
I1206 11:05:49.682749  425114 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 11:05:49.683127  425114 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
I1206 11:05:49.684190  425114 config.go:182] Loaded profile config "functional-196950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 11:05:49.684332  425114 config.go:182] Loaded profile config "functional-196950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 11:05:49.684832  425114 cli_runner.go:164] Run: docker container inspect functional-196950 --format={{.State.Status}}
I1206 11:05:49.703003  425114 ssh_runner.go:195] Run: systemctl --version
I1206 11:05:49.703071  425114 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
I1206 11:05:49.721441  425114 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
I1206 11:05:49.826124  425114 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.24s)
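The stderr above shows where this table comes from: minikube shells into the node and reads image metadata from the CRI. The same raw data can be queried by hand; a minimal sketch, reusing the profile name and the crictl invocation visible in this run's log:

    # list CRI images on the node directly (the call the log shows minikube running)
    out/minikube-linux-arm64 -p functional-196950 ssh -- sudo crictl images --output json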
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.23s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-196950 image ls --format json --alsologtostderr:
[{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-196950"],"size":"4788229"},{"id":"6044d1a5e76fc3db9ee5a0298b4e62a5c2d3a619a9ff917ee277cefa0e49b4a0","repoDigests":["localhost/minikube-local-cache-test@sha256:5b377d95c8e53c387b469c0bb26f0a0db0d7b0a8046aed8bc6ba7cba62805742"],"repoTags":["localhost/minikube-local-cache-test:functional-196950"],"size":"3328"},{"id":"307522c4cda46c37c0430ad005ccee1c66f23ac9edbf3f86604ee6141b740d11","repoDigests":["localhost/my-image@sha256:eda5ab20323e9f4891eb3017b1fbef6f6c9e3d61e642300a17fef6e68a50a55e"],"repoTags":["localhost/my-image:functional-196950"],"size":"1640791"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry
.k8s.io/pause:3.1"],"size":"528622"},{"id":"16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6","registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"49822549"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:0f
87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"60857170"},{"id":"68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d","registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"72170325"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1
c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"6545ac336bf919624baff85aaf703cb64b4540d3a1f531cde24e6aee9f6a13f8","repoDigests":["docker.io/library/b3ec2b10b4b78b211abeac8f900ead7da3b369c867c8d727ccf363e8d8890198-tmp@sha256:57bb61d814163f8a0395ac42f0e71bd89cceff76e07b8bb7fabf3ef93f785d43"],"repoTags":[],"size":"1638179"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"e08f4d9d2e6ede8185064c13b41f8eeee
95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6","registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"74491780"},{"id":"ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58","registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"84949999"},{"id":"404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904","repoDigests":["registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478","registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"],"repoTags":["regist
ry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"74106775"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-196950 image ls --format json --alsologtostderr:
I1206 11:05:49.445963  425076 out.go:360] Setting OutFile to fd 1 ...
I1206 11:05:49.446080  425076 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 11:05:49.446089  425076 out.go:374] Setting ErrFile to fd 2...
I1206 11:05:49.446095  425076 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 11:05:49.446377  425076 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
I1206 11:05:49.447037  425076 config.go:182] Loaded profile config "functional-196950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 11:05:49.447177  425076 config.go:182] Loaded profile config "functional-196950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 11:05:49.447794  425076 cli_runner.go:164] Run: docker container inspect functional-196950 --format={{.State.Status}}
I1206 11:05:49.465675  425076 ssh_runner.go:195] Run: systemctl --version
I1206 11:05:49.465733  425076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
I1206 11:05:49.484024  425076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
I1206 11:05:49.589963  425076 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.23s)
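Each element in the JSON above carries id, repoDigests, repoTags, and size fields, so the output pipes cleanly into standard tooling. A small sketch, assuming jq is installed on the host:

    # print the first tag (or <none>) and the size of every image
    out/minikube-linux-arm64 -p functional-196950 image ls --format json \
      | jq -r '.[] | "\(.repoTags[0] // "<none>") \(.size)"'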
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.26s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-196950 image ls --format yaml --alsologtostderr:
- id: 68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
- registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "72170325"
- id: 404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904
repoDigests:
- registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "74106775"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "60857170"
- id: 16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
- registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "49822549"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-196950
size: "4788229"
- id: 6044d1a5e76fc3db9ee5a0298b4e62a5c2d3a619a9ff917ee277cefa0e49b4a0
repoDigests:
- localhost/minikube-local-cache-test@sha256:5b377d95c8e53c387b469c0bb26f0a0db0d7b0a8046aed8bc6ba7cba62805742
repoTags:
- localhost/minikube-local-cache-test:functional-196950
size: "3328"
- id: e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
- registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "74491780"
- id: ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
- registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "84949999"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-196950 image ls --format yaml --alsologtostderr:
I1206 11:05:45.472627  424653 out.go:360] Setting OutFile to fd 1 ...
I1206 11:05:45.472852  424653 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 11:05:45.472879  424653 out.go:374] Setting ErrFile to fd 2...
I1206 11:05:45.472901  424653 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 11:05:45.473207  424653 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
I1206 11:05:45.473887  424653 config.go:182] Loaded profile config "functional-196950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 11:05:45.474066  424653 config.go:182] Loaded profile config "functional-196950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 11:05:45.474629  424653 cli_runner.go:164] Run: docker container inspect functional-196950 --format={{.State.Status}}
I1206 11:05:45.499652  424653 ssh_runner.go:195] Run: systemctl --version
I1206 11:05:45.499704  424653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
I1206 11:05:45.525563  424653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
I1206 11:05:45.634433  424653 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.26s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.72s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-196950 ssh pgrep buildkitd: exit status 1 (276.522105ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 image build -t localhost/my-image:functional-196950 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-196950 image build -t localhost/my-image:functional-196950 testdata/build --alsologtostderr: (3.218014177s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-196950 image build -t localhost/my-image:functional-196950 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 6545ac336bf
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-196950
--> 307522c4cda
Successfully tagged localhost/my-image:functional-196950
307522c4cda46c37c0430ad005ccee1c66f23ac9edbf3f86604ee6141b740d11
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-196950 image build -t localhost/my-image:functional-196950 testdata/build --alsologtostderr:
I1206 11:05:45.995601  424759 out.go:360] Setting OutFile to fd 1 ...
I1206 11:05:45.995739  424759 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 11:05:45.995752  424759 out.go:374] Setting ErrFile to fd 2...
I1206 11:05:45.995771  424759 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1206 11:05:45.996050  424759 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
I1206 11:05:45.996725  424759 config.go:182] Loaded profile config "functional-196950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 11:05:45.997408  424759 config.go:182] Loaded profile config "functional-196950": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1206 11:05:45.998012  424759 cli_runner.go:164] Run: docker container inspect functional-196950 --format={{.State.Status}}
I1206 11:05:46.019182  424759 ssh_runner.go:195] Run: systemctl --version
I1206 11:05:46.019246  424759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-196950
I1206 11:05:46.037607  424759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/functional-196950/id_rsa Username:docker}
I1206 11:05:46.142062  424759 build_images.go:162] Building image from path: /tmp/build.2333644709.tar
I1206 11:05:46.142133  424759 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1206 11:05:46.150034  424759 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2333644709.tar
I1206 11:05:46.153684  424759 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2333644709.tar: stat -c "%s %y" /var/lib/minikube/build/build.2333644709.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2333644709.tar': No such file or directory
I1206 11:05:46.153712  424759 ssh_runner.go:362] scp /tmp/build.2333644709.tar --> /var/lib/minikube/build/build.2333644709.tar (3072 bytes)
I1206 11:05:46.171781  424759 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2333644709
I1206 11:05:46.180062  424759 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2333644709 -xf /var/lib/minikube/build/build.2333644709.tar
I1206 11:05:46.188320  424759 crio.go:315] Building image: /var/lib/minikube/build/build.2333644709
I1206 11:05:46.188406  424759 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-196950 /var/lib/minikube/build/build.2333644709 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1206 11:05:49.131250  424759 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-196950 /var/lib/minikube/build/build.2333644709 --cgroup-manager=cgroupfs: (2.942801425s)
I1206 11:05:49.131325  424759 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2333644709
I1206 11:05:49.145085  424759 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2333644709.tar
I1206 11:05:49.152913  424759 build_images.go:218] Built localhost/my-image:functional-196950 from /tmp/build.2333644709.tar
I1206 11:05:49.152957  424759 build_images.go:134] succeeded building to: functional-196950
I1206 11:05:49.152964  424759 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.72s)
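The STEP 1/3 through 3/3 lines above imply that testdata/build is a three-instruction build context, executed on the node via podman (per the stderr). A sketch of a context that would reproduce those steps; the file name Dockerfile and the content.txt payload are assumptions, since only the instructions appear in the log:

    mkdir -p build-context && cd build-context
    echo 'example payload' > content.txt    # payload is made up; only the file name is in the log
    cat > Dockerfile <<'EOF'
    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /
    EOF
    out/minikube-linux-arm64 -p functional-196950 image build -t localhost/my-image:functional-196950 . --alsologtostderr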
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.3s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-196950
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.30s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.6s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 image load --daemon kicbase/echo-server:functional-196950 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-196950 image load --daemon kicbase/echo-server:functional-196950 --alsologtostderr: (1.337791051s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.60s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.04s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 image load --daemon kicbase/echo-server:functional-196950 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (1.04s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.35s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-196950
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 image load --daemon kicbase/echo-server:functional-196950 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.35s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.19s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.19s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.44s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 image save kicbase/echo-server:functional-196950 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.44s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.68s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 image rm kicbase/echo-server:functional-196950 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.68s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.86s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.86s)
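Taken together, ImageSaveToFile, ImageRemove, and this test exercise a full save/remove/restore round trip. Condensed into one sequence (the tar path is shortened here; this run used a Jenkins workspace path):

    out/minikube-linux-arm64 -p functional-196950 image save kicbase/echo-server:functional-196950 /tmp/echo-server-save.tar --alsologtostderr
    out/minikube-linux-arm64 -p functional-196950 image rm kicbase/echo-server:functional-196950 --alsologtostderr
    out/minikube-linux-arm64 -p functional-196950 image load /tmp/echo-server-save.tar --alsologtostderr
    out/minikube-linux-arm64 -p functional-196950 image ls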
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.56s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-196950
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 image save --daemon kicbase/echo-server:functional-196950 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-196950
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.56s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-196950 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.1s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-196950 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.10s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.43s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.43s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.39s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "331.357491ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "55.637805ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.39s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.43s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "373.108859ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "58.271971ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.43s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.94s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-196950 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1854477046/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-196950 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (363.158794ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1206 11:05:38.049008  364855 retry.go:31] will retry after 494.985794ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-196950 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1854477046/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-196950 ssh "sudo umount -f /mount-9p": exit status 1 (272.990792ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-196950 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-196950 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1854477046/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (1.94s)
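The retried findmnt above reflects that the 9p mount comes up asynchronously, so the first probe can land before the mount is ready. Verifying a fixed-port mount by hand follows the same pattern; the host directory below is an assumption:

    out/minikube-linux-arm64 mount -p functional-196950 /tmp/src:/mount-9p --port 46464 &
    out/minikube-linux-arm64 -p functional-196950 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-arm64 -p functional-196950 ssh -- ls -la /mount-9p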
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.82s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-196950 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo571722621/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-196950 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo571722621/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-196950 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo571722621/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-196950 ssh "findmnt -T" /mount1: exit status 1 (547.720249ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1206 11:05:40.170860  364855 retry.go:31] will retry after 360.114602ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-196950 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-196950 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-196950 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo571722621/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-196950 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo571722621/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-196950 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo571722621/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (1.82s)
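Note that the three concurrent mounts are torn down with a single kill switch rather than per-mount stops; the cleanup command from the run above, on its own:

    out/minikube-linux-arm64 mount -p functional-196950 --kill=true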
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-196950
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.04s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-196950
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-196950
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)
TestMultiControlPlane/serial/StartCluster (209.16s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1206 11:08:41.813686  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:08:55.065245  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:08:55.071729  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:08:55.083093  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:08:55.104475  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:08:55.145855  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:08:55.227315  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:08:55.388984  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:08:55.710611  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:08:56.352681  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:08:57.634036  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:09:00.203290  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:09:05.325263  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:09:15.566749  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:09:36.048820  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:10:17.011205  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:10:45.365015  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-205266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-037641 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m28.162794021s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (209.16s)
TestMultiControlPlane/serial/DeployApp (8.06s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-037641 kubectl -- rollout status deployment/busybox: (5.295067645s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 kubectl -- exec busybox-7b57f96db7-4bwkj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 kubectl -- exec busybox-7b57f96db7-6l544 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 kubectl -- exec busybox-7b57f96db7-dzx7v -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 kubectl -- exec busybox-7b57f96db7-4bwkj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 kubectl -- exec busybox-7b57f96db7-6l544 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 kubectl -- exec busybox-7b57f96db7-dzx7v -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 kubectl -- exec busybox-7b57f96db7-4bwkj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 kubectl -- exec busybox-7b57f96db7-6l544 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 kubectl -- exec busybox-7b57f96db7-dzx7v -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.06s)
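The three lookup targets above step outward through DNS resolution: kubernetes.io checks external resolution, kubernetes.default relies on the pod's search-path expansion, and kubernetes.default.svc.cluster.local checks the fully qualified in-cluster name. Spot-checking one pod by hand (pod name taken from this run):

    kubectl --context ha-037641 exec busybox-7b57f96db7-4bwkj -- nslookup kubernetes.default.svc.cluster.local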
TestMultiControlPlane/serial/PingHostFromPods (1.58s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 kubectl -- exec busybox-7b57f96db7-4bwkj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
E1206 11:11:38.935643  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 kubectl -- exec busybox-7b57f96db7-4bwkj -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 kubectl -- exec busybox-7b57f96db7-6l544 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 kubectl -- exec busybox-7b57f96db7-6l544 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 kubectl -- exec busybox-7b57f96db7-dzx7v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 kubectl -- exec busybox-7b57f96db7-dzx7v -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.58s)
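The pipeline above isolates the resolved address of host.minikube.internal: with the busybox nslookup output format, the answer line typically lands on line 5 as "Address 1: <ip> <name>", so awk 'NR==5' keeps that line and cut -d' ' -f3 keeps the third space-separated field, i.e. the IP. The ping that follows then confirms the extracted address (192.168.49.1 here) is reachable from the pod. Standalone, with a pod name from this run:

    kubectl --context ha-037641 exec busybox-7b57f96db7-4bwkj -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"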
TestMultiControlPlane/serial/AddWorkerNode (59.4s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-037641 node add --alsologtostderr -v 5: (58.320934363s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-037641 status --alsologtostderr -v 5: (1.074525299s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.40s)
TestMultiControlPlane/serial/NodeLabels (0.12s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-037641 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.13s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.132874871s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.13s)
TestMultiControlPlane/serial/CopyFile (21.15s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-037641 status --output json --alsologtostderr -v 5: (1.120267231s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 cp testdata/cp-test.txt ha-037641:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 ssh -n ha-037641 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 cp ha-037641:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile293741304/001/cp-test_ha-037641.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 ssh -n ha-037641 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 cp ha-037641:/home/docker/cp-test.txt ha-037641-m02:/home/docker/cp-test_ha-037641_ha-037641-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 ssh -n ha-037641 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 ssh -n ha-037641-m02 "sudo cat /home/docker/cp-test_ha-037641_ha-037641-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 cp ha-037641:/home/docker/cp-test.txt ha-037641-m03:/home/docker/cp-test_ha-037641_ha-037641-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 ssh -n ha-037641 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 ssh -n ha-037641-m03 "sudo cat /home/docker/cp-test_ha-037641_ha-037641-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 cp ha-037641:/home/docker/cp-test.txt ha-037641-m04:/home/docker/cp-test_ha-037641_ha-037641-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 ssh -n ha-037641 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 ssh -n ha-037641-m04 "sudo cat /home/docker/cp-test_ha-037641_ha-037641-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 cp testdata/cp-test.txt ha-037641-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 ssh -n ha-037641-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 cp ha-037641-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile293741304/001/cp-test_ha-037641-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 ssh -n ha-037641-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 cp ha-037641-m02:/home/docker/cp-test.txt ha-037641:/home/docker/cp-test_ha-037641-m02_ha-037641.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 ssh -n ha-037641-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 ssh -n ha-037641 "sudo cat /home/docker/cp-test_ha-037641-m02_ha-037641.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 cp ha-037641-m02:/home/docker/cp-test.txt ha-037641-m03:/home/docker/cp-test_ha-037641-m02_ha-037641-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 ssh -n ha-037641-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 ssh -n ha-037641-m03 "sudo cat /home/docker/cp-test_ha-037641-m02_ha-037641-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 cp ha-037641-m02:/home/docker/cp-test.txt ha-037641-m04:/home/docker/cp-test_ha-037641-m02_ha-037641-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 ssh -n ha-037641-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 ssh -n ha-037641-m04 "sudo cat /home/docker/cp-test_ha-037641-m02_ha-037641-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 cp testdata/cp-test.txt ha-037641-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 ssh -n ha-037641-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 cp ha-037641-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile293741304/001/cp-test_ha-037641-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 ssh -n ha-037641-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 cp ha-037641-m03:/home/docker/cp-test.txt ha-037641:/home/docker/cp-test_ha-037641-m03_ha-037641.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 ssh -n ha-037641-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 ssh -n ha-037641 "sudo cat /home/docker/cp-test_ha-037641-m03_ha-037641.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 cp ha-037641-m03:/home/docker/cp-test.txt ha-037641-m02:/home/docker/cp-test_ha-037641-m03_ha-037641-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 ssh -n ha-037641-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 ssh -n ha-037641-m02 "sudo cat /home/docker/cp-test_ha-037641-m03_ha-037641-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 cp ha-037641-m03:/home/docker/cp-test.txt ha-037641-m04:/home/docker/cp-test_ha-037641-m03_ha-037641-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 ssh -n ha-037641-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 ssh -n ha-037641-m04 "sudo cat /home/docker/cp-test_ha-037641-m03_ha-037641-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 cp testdata/cp-test.txt ha-037641-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 ssh -n ha-037641-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 cp ha-037641-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile293741304/001/cp-test_ha-037641-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 ssh -n ha-037641-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 cp ha-037641-m04:/home/docker/cp-test.txt ha-037641:/home/docker/cp-test_ha-037641-m04_ha-037641.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 ssh -n ha-037641-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 ssh -n ha-037641 "sudo cat /home/docker/cp-test_ha-037641-m04_ha-037641.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 cp ha-037641-m04:/home/docker/cp-test.txt ha-037641-m02:/home/docker/cp-test_ha-037641-m04_ha-037641-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 ssh -n ha-037641-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 ssh -n ha-037641-m02 "sudo cat /home/docker/cp-test_ha-037641-m04_ha-037641-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 cp ha-037641-m04:/home/docker/cp-test.txt ha-037641-m03:/home/docker/cp-test_ha-037641-m04_ha-037641-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 ssh -n ha-037641-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 ssh -n ha-037641-m03 "sudo cat /home/docker/cp-test_ha-037641-m04_ha-037641-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (21.15s)
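
For reference, the CopyFile block above is a full copy matrix: testdata/cp-test.txt goes from the host to each of the four nodes, and each node's copy is then fanned out to the other three and read back over ssh. A compact sketch of how that matrix expands (node names are taken from the log; the commands are only printed here, not executed):

```go
package main

import "fmt"

// Prints the `minikube cp` matrix exercised by TestMultiControlPlane/serial/CopyFile:
// host -> each node, then each node -> every other node.
func main() {
	nodes := []string{"ha-037641", "ha-037641-m02", "ha-037641-m03", "ha-037641-m04"}
	for _, src := range nodes {
		fmt.Printf("minikube -p ha-037641 cp testdata/cp-test.txt %s:/home/docker/cp-test.txt\n", src)
		for _, dst := range nodes {
			if dst == src {
				continue
			}
			fmt.Printf("minikube -p ha-037641 cp %s:/home/docker/cp-test.txt %s:/home/docker/cp-test_%s_%s.txt\n",
				src, dst, src, dst)
		}
	}
}
```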

TestMultiControlPlane/serial/StopSecondaryNode (12.96s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-037641 node stop m02 --alsologtostderr -v 5: (12.151416552s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-037641 status --alsologtostderr -v 5: exit status 7 (809.614624ms)

-- stdout --
	ha-037641
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-037641-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-037641-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-037641-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1206 11:13:14.209095  441021 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:13:14.209288  441021 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:13:14.209320  441021 out.go:374] Setting ErrFile to fd 2...
	I1206 11:13:14.209342  441021 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:13:14.209618  441021 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 11:13:14.209838  441021 out.go:368] Setting JSON to false
	I1206 11:13:14.209905  441021 mustload.go:66] Loading cluster: ha-037641
	I1206 11:13:14.209971  441021 notify.go:221] Checking for updates...
	I1206 11:13:14.211031  441021 config.go:182] Loaded profile config "ha-037641": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 11:13:14.211076  441021 status.go:174] checking status of ha-037641 ...
	I1206 11:13:14.211699  441021 cli_runner.go:164] Run: docker container inspect ha-037641 --format={{.State.Status}}
	I1206 11:13:14.232504  441021 status.go:371] ha-037641 host status = "Running" (err=<nil>)
	I1206 11:13:14.232528  441021 host.go:66] Checking if "ha-037641" exists ...
	I1206 11:13:14.232845  441021 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-037641
	I1206 11:13:14.262715  441021 host.go:66] Checking if "ha-037641" exists ...
	I1206 11:13:14.263016  441021 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 11:13:14.263060  441021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-037641
	I1206 11:13:14.283098  441021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/ha-037641/id_rsa Username:docker}
	I1206 11:13:14.388994  441021 ssh_runner.go:195] Run: systemctl --version
	I1206 11:13:14.395842  441021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 11:13:14.409730  441021 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:13:14.480950  441021 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-06 11:13:14.469917565 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:13:14.481489  441021 kubeconfig.go:125] found "ha-037641" server: "https://192.168.49.254:8443"
	I1206 11:13:14.481526  441021 api_server.go:166] Checking apiserver status ...
	I1206 11:13:14.481584  441021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:13:14.500410  441021 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1255/cgroup
	I1206 11:13:14.510510  441021 api_server.go:182] apiserver freezer: "5:freezer:/docker/31456aed6643c28e35a904a947c38baa3d7c4adc2dd7985bd18d58fff1770a4f/crio/crio-3b62c0c4073300835d34c8348c1b8dc16a087273dcbb3b961319da5db14e9d60"
	I1206 11:13:14.510593  441021 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/31456aed6643c28e35a904a947c38baa3d7c4adc2dd7985bd18d58fff1770a4f/crio/crio-3b62c0c4073300835d34c8348c1b8dc16a087273dcbb3b961319da5db14e9d60/freezer.state
	I1206 11:13:14.518697  441021 api_server.go:204] freezer state: "THAWED"
	I1206 11:13:14.518722  441021 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1206 11:13:14.527699  441021 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1206 11:13:14.527733  441021 status.go:463] ha-037641 apiserver status = Running (err=<nil>)
	I1206 11:13:14.527748  441021 status.go:176] ha-037641 status: &{Name:ha-037641 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 11:13:14.527765  441021 status.go:174] checking status of ha-037641-m02 ...
	I1206 11:13:14.528079  441021 cli_runner.go:164] Run: docker container inspect ha-037641-m02 --format={{.State.Status}}
	I1206 11:13:14.548931  441021 status.go:371] ha-037641-m02 host status = "Stopped" (err=<nil>)
	I1206 11:13:14.548954  441021 status.go:384] host is not running, skipping remaining checks
	I1206 11:13:14.548981  441021 status.go:176] ha-037641-m02 status: &{Name:ha-037641-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 11:13:14.549010  441021 status.go:174] checking status of ha-037641-m03 ...
	I1206 11:13:14.549343  441021 cli_runner.go:164] Run: docker container inspect ha-037641-m03 --format={{.State.Status}}
	I1206 11:13:14.567865  441021 status.go:371] ha-037641-m03 host status = "Running" (err=<nil>)
	I1206 11:13:14.567894  441021 host.go:66] Checking if "ha-037641-m03" exists ...
	I1206 11:13:14.568218  441021 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-037641-m03
	I1206 11:13:14.586559  441021 host.go:66] Checking if "ha-037641-m03" exists ...
	I1206 11:13:14.586870  441021 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 11:13:14.586915  441021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-037641-m03
	I1206 11:13:14.604811  441021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/ha-037641-m03/id_rsa Username:docker}
	I1206 11:13:14.713561  441021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 11:13:14.728297  441021 kubeconfig.go:125] found "ha-037641" server: "https://192.168.49.254:8443"
	I1206 11:13:14.728329  441021 api_server.go:166] Checking apiserver status ...
	I1206 11:13:14.728373  441021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:13:14.740190  441021 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1202/cgroup
	I1206 11:13:14.750280  441021 api_server.go:182] apiserver freezer: "5:freezer:/docker/a0a83c6cff233801e16291706a1df6eac277e7d4f99a83041f69f49ac7283481/crio/crio-8b2006cdaba30a1d354ffdf1ff4c5c6ddc64357d046f9d1744bd8efed430aaf5"
	I1206 11:13:14.750367  441021 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a0a83c6cff233801e16291706a1df6eac277e7d4f99a83041f69f49ac7283481/crio/crio-8b2006cdaba30a1d354ffdf1ff4c5c6ddc64357d046f9d1744bd8efed430aaf5/freezer.state
	I1206 11:13:14.758685  441021 api_server.go:204] freezer state: "THAWED"
	I1206 11:13:14.758714  441021 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1206 11:13:14.767305  441021 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1206 11:13:14.767336  441021 status.go:463] ha-037641-m03 apiserver status = Running (err=<nil>)
	I1206 11:13:14.767345  441021 status.go:176] ha-037641-m03 status: &{Name:ha-037641-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 11:13:14.767413  441021 status.go:174] checking status of ha-037641-m04 ...
	I1206 11:13:14.767747  441021 cli_runner.go:164] Run: docker container inspect ha-037641-m04 --format={{.State.Status}}
	I1206 11:13:14.794867  441021 status.go:371] ha-037641-m04 host status = "Running" (err=<nil>)
	I1206 11:13:14.794895  441021 host.go:66] Checking if "ha-037641-m04" exists ...
	I1206 11:13:14.795224  441021 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-037641-m04
	I1206 11:13:14.814093  441021 host.go:66] Checking if "ha-037641-m04" exists ...
	I1206 11:13:14.814462  441021 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 11:13:14.814509  441021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-037641-m04
	I1206 11:13:14.831927  441021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/ha-037641-m04/id_rsa Username:docker}
	I1206 11:13:14.940974  441021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 11:13:14.955370  441021 status.go:176] ha-037641-m04 status: &{Name:ha-037641-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.96s)
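
The stderr trace above shows how the status probe inspects each control-plane node: find the kube-apiserver pid, confirm its cgroup-v1 freezer state is THAWED (i.e. the container is not paused), then hit /healthz on the HA VIP. A rough standalone sketch of the same sequence, reusing the paths and endpoint from the log; minikube runs these steps over SSH inside the node and validates TLS against the cluster CA, whereas this sketch skips verification for brevity:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// 1. Newest kube-apiserver process (cf. `sudo pgrep -xnf kube-apiserver.*minikube.*` above).
	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "apiserver process not found:", err)
		return
	}
	pid := strings.TrimSpace(string(out))

	// 2. Resolve its freezer cgroup and read the state; "THAWED" means not paused.
	cg, _ := os.ReadFile("/proc/" + pid + "/cgroup")
	for _, line := range strings.Split(string(cg), "\n") {
		if parts := strings.SplitN(line, ":", 3); len(parts) == 3 && parts[1] == "freezer" {
			state, _ := os.ReadFile("/sys/fs/cgroup/freezer" + parts[2] + "/freezer.state")
			fmt.Println("freezer state:", strings.TrimSpace(string(state)))
		}
	}

	// 3. Probe the apiserver; HTTP 200 on /healthz counts as Running.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
	}}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		fmt.Fprintln(os.Stderr, "healthz:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status)
}
```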

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.86s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.86s)

TestMultiControlPlane/serial/RestartSecondaryNode (21.11s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-037641 node start m02 --alsologtostderr -v 5: (19.790658186s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-037641 status --alsologtostderr -v 5: (1.22011731s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (21.11s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.13s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.13176086s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.13s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (118.15s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 stop --alsologtostderr -v 5
E1206 11:13:41.814674  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:13:48.437283  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-205266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:13:55.064921  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-037641 stop --alsologtostderr -v 5: (26.659549498s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 start --wait true --alsologtostderr -v 5
E1206 11:14:22.777533  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-037641 start --wait true --alsologtostderr -v 5: (1m31.319423318s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (118.15s)

TestMultiControlPlane/serial/DeleteSecondaryNode (12.29s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 node delete m03 --alsologtostderr -v 5
E1206 11:15:45.364461  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-205266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-037641 node delete m03 --alsologtostderr -v 5: (11.285909617s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.29s)
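
The Ready check above is done with a kubectl go-template. That particular template uses only core text/template constructs, so its behavior can be reproduced with Go's standard library; the node list below is a hand-written stand-in for real `kubectl get nodes -o json` output:

```go
package main

import (
	"encoding/json"
	"os"
	"text/template"
)

// The template string passed to kubectl in ha_test.go above.
const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

// Minimal stand-in for a node list; only the fields the template touches.
const nodeList = `{"items":[{"status":{"conditions":[{"type":"Ready","status":"True"}]}}]}`

func main() {
	var doc map[string]interface{}
	if err := json.Unmarshal([]byte(nodeList), &doc); err != nil {
		panic(err)
	}
	t := template.Must(template.New("ready").Parse(tmpl))
	// Prints " True" plus a newline per node whose Ready condition is set.
	if err := t.Execute(os.Stdout, doc); err != nil {
		panic(err)
	}
}
```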

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.81s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.81s)

TestMultiControlPlane/serial/StopCluster (36.21s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-037641 stop --alsologtostderr -v 5: (36.086530227s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-037641 status --alsologtostderr -v 5: exit status 7 (122.060452ms)

-- stdout --
	ha-037641
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-037641-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-037641-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1206 11:16:25.442848  452880 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:16:25.442983  452880 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:16:25.442996  452880 out.go:374] Setting ErrFile to fd 2...
	I1206 11:16:25.443002  452880 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:16:25.443362  452880 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 11:16:25.443842  452880 out.go:368] Setting JSON to false
	I1206 11:16:25.443908  452880 mustload.go:66] Loading cluster: ha-037641
	I1206 11:16:25.443996  452880 notify.go:221] Checking for updates...
	I1206 11:16:25.444439  452880 config.go:182] Loaded profile config "ha-037641": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 11:16:25.444482  452880 status.go:174] checking status of ha-037641 ...
	I1206 11:16:25.445062  452880 cli_runner.go:164] Run: docker container inspect ha-037641 --format={{.State.Status}}
	I1206 11:16:25.470485  452880 status.go:371] ha-037641 host status = "Stopped" (err=<nil>)
	I1206 11:16:25.470510  452880 status.go:384] host is not running, skipping remaining checks
	I1206 11:16:25.470518  452880 status.go:176] ha-037641 status: &{Name:ha-037641 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 11:16:25.470564  452880 status.go:174] checking status of ha-037641-m02 ...
	I1206 11:16:25.470878  452880 cli_runner.go:164] Run: docker container inspect ha-037641-m02 --format={{.State.Status}}
	I1206 11:16:25.495224  452880 status.go:371] ha-037641-m02 host status = "Stopped" (err=<nil>)
	I1206 11:16:25.495250  452880 status.go:384] host is not running, skipping remaining checks
	I1206 11:16:25.495258  452880 status.go:176] ha-037641-m02 status: &{Name:ha-037641-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 11:16:25.495277  452880 status.go:174] checking status of ha-037641-m04 ...
	I1206 11:16:25.495629  452880 cli_runner.go:164] Run: docker container inspect ha-037641-m04 --format={{.State.Status}}
	I1206 11:16:25.514067  452880 status.go:371] ha-037641-m04 host status = "Stopped" (err=<nil>)
	I1206 11:16:25.514089  452880 status.go:384] host is not running, skipping remaining checks
	I1206 11:16:25.514096  452880 status.go:176] ha-037641-m04 status: &{Name:ha-037641-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.21s)
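
Note that `minikube status` deliberately exits with status 7 when any host is stopped, as seen here and in StopSecondaryNode above, so the Non-zero exit is expected rather than a failure. An illustrative wrapper (not part of the suite) that treats that exit code as "stopped" instead of an error:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Profile name taken from the log above.
	out, err := exec.Command("minikube", "-p", "ha-037641", "status").CombinedOutput()
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all nodes running")
	case errors.As(err, &ee) && ee.ExitCode() == 7:
		// Exit code 7: one or more hosts stopped; output still lists per-node state.
		fmt.Print("cluster (partially) stopped:\n" + string(out))
	default:
		fmt.Println("status failed:", err)
	}
}
```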

TestMultiControlPlane/serial/RestartCluster (94.09s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-037641 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m33.044329985s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (94.09s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.13s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.128303935s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.13s)

TestMultiControlPlane/serial/AddSecondaryNode (81.87s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 node add --control-plane --alsologtostderr -v 5
E1206 11:18:41.813666  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:18:55.065281  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-037641 node add --control-plane --alsologtostderr -v 5: (1m20.745827833s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-037641 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-037641 status --alsologtostderr -v 5: (1.123274272s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (81.87s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.2s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.204290398s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.20s)

TestJSONOutput/start/Command (78.8s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-250118 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1206 11:20:45.364333  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-205266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-250118 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m18.790159756s)
--- PASS: TestJSONOutput/start/Command (78.80s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.89s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-250118 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-250118 --output=json --user=testUser: (5.89191578s)
--- PASS: TestJSONOutput/stop/Command (5.89s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.25s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-114660 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-114660 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (100.83112ms)

-- stdout --
	{"specversion":"1.0","id":"59cb9b00-ce52-45be-ab45-1b5a5375b16b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-114660] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"02d7b337-ad39-430a-b7ae-5a5093e5bad1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22047"}}
	{"specversion":"1.0","id":"b3e75326-e977-455f-ab51-53b05615d41c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"199ef0c0-d438-4280-921e-54d566547f2d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22047-362985/kubeconfig"}}
	{"specversion":"1.0","id":"fbbac8ee-a3dc-4cef-a20b-6303ac79342b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-362985/.minikube"}}
	{"specversion":"1.0","id":"0c9f9b56-e33c-4184-9534-6032f9de477c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"5489269f-6f3d-4b0e-b138-2de2b825374a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"750259f0-7f2f-40dd-908e-56b2f333ba2d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-114660" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-114660
--- PASS: TestErrorJSONOutput (0.25s)
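
Each line in the stdout block above is a self-contained CloudEvents-style JSON object, which is what makes --output=json scriptable. A minimal consumer sketch that surfaces `io.k8s.sigs.minikube.error` events, assuming only the field names visible in the output above (pipe `minikube start --output=json ...` into it):

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// Shape of the events printed above; data values are all strings in the log.
type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate any non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("exit %s (%s): %s\n", ev.Data["exitcode"], ev.Data["name"], ev.Data["message"])
		}
	}
}
```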

TestKicCustomNetwork/create_custom_network (39.73s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-849948 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-849948 --network=: (37.315138981s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-849948" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-849948
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-849948: (2.380759461s)
--- PASS: TestKicCustomNetwork/create_custom_network (39.73s)

TestKicCustomNetwork/use_default_bridge_network (36.66s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-237043 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-237043 --network=bridge: (34.472916421s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-237043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-237043
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-237043: (2.169852069s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.66s)

TestKicExistingNetwork (40.8s)

=== RUN   TestKicExistingNetwork
I1206 11:22:22.794295  364855 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1206 11:22:22.811133  364855 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1206 11:22:22.811224  364855 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1206 11:22:22.811243  364855 cli_runner.go:164] Run: docker network inspect existing-network
W1206 11:22:22.827584  364855 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1206 11:22:22.827618  364855 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1206 11:22:22.827634  364855 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1206 11:22:22.827735  364855 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1206 11:22:22.849395  364855 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a2e57973c06f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ca:70:7c:95:fc:30} reservation:<nil>}
I1206 11:22:22.850799  364855 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018a8e50}
I1206 11:22:22.850885  364855 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1206 11:22:22.850944  364855 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1206 11:22:22.914010  364855 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-042214 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-042214 --network=existing-network: (38.418809818s)
helpers_test.go:175: Cleaning up "existing-network-042214" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-042214
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-042214: (2.224212008s)
I1206 11:23:03.573707  364855 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (40.80s)
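
The trace above also shows the free-subnet scan: 192.168.49.0/24 is already taken by an existing bridge, so the next candidate, 192.168.58.0/24, is chosen for the new network. A toy reconstruction of that scan; the step of 9 in the third octet is inferred from the 49 -> 58 jump in the log, and the taken list is hardcoded here, whereas minikube derives it from `docker network inspect` and the host's interfaces:

```go
package main

import "fmt"

func main() {
	taken := map[string]bool{"192.168.49.0/24": true} // e.g. the default minikube bridge
	for octet := 49; octet <= 254; octet += 9 { // step inferred from the log above
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if taken[subnet] {
			fmt.Println("skipping subnet", subnet, "that is taken")
			continue
		}
		fmt.Println("using free private subnet", subnet)
		return
	}
}
```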

TestKicCustomSubnet (33.79s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-809692 --subnet=192.168.60.0/24
E1206 11:23:24.888857  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-809692 --subnet=192.168.60.0/24: (31.562114465s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-809692 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-809692" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-809692
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-809692: (2.202886177s)
--- PASS: TestKicCustomSubnet (33.79s)

TestKicStaticIP (37.83s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-509130 --static-ip=192.168.200.200
E1206 11:23:41.814032  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:23:55.065606  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-509130 --static-ip=192.168.200.200: (35.326776964s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-509130 ip
helpers_test.go:175: Cleaning up "static-ip-509130" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-509130
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-509130: (2.337601674s)
--- PASS: TestKicStaticIP (37.83s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (70.57s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-217261 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-217261 --driver=docker  --container-runtime=crio: (31.530557937s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-219681 --driver=docker  --container-runtime=crio
E1206 11:25:18.139592  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-219681 --driver=docker  --container-runtime=crio: (33.206620095s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-217261
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-219681
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-219681" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-219681
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-219681: (2.161318333s)
helpers_test.go:175: Cleaning up "first-217261" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-217261
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-217261: (2.10569588s)
--- PASS: TestMinikubeProfile (70.57s)

TestMountStart/serial/StartWithMountFirst (9.21s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-964425 --memory=3072 --mount-string /tmp/TestMountStartserial1371560443/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-964425 --memory=3072 --mount-string /tmp/TestMountStartserial1371560443/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.207972196s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.21s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-964425 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)
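The two MountStart steps above pair a host-directory mount at start time with an `ssh -- ls` probe of the mount point. A condensed sketch of the same pair, mirroring the flags the test passes, with hypothetical profile and host path:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	profile := "mount-demo"      // hypothetical
	hostDir := "/tmp/mount-demo" // hypothetical host directory, must exist
	// Start with the host directory mounted into the guest at /minikube-host.
	if out, err := exec.Command("minikube", "start", "-p", profile,
		"--mount-string", hostDir+":/minikube-host",
		"--no-kubernetes", "--driver=docker").CombinedOutput(); err != nil {
		log.Fatalf("start failed: %v\n%s", err, out)
	}
	// Verify by listing the mount point from inside the guest.
	out, err := exec.Command("minikube", "-p", profile, "ssh", "--",
		"ls", "/minikube-host").CombinedOutput()
	if err != nil {
		log.Fatalf("mount probe failed: %v\n%s", err, out)
	}
	fmt.Printf("mount contents:\n%s", out)
}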

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.87s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-966649 --memory=3072 --mount-string /tmp/TestMountStartserial1371560443/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-966649 --memory=3072 --mount-string /tmp/TestMountStartserial1371560443/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.870510894s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.87s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-966649 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.74s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-964425 --alsologtostderr -v=5
E1206 11:25:45.365433  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-205266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-964425 --alsologtostderr -v=5: (1.742419266s)
--- PASS: TestMountStart/serial/DeleteFirst (1.74s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-966649 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                    
TestMountStart/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-966649
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-966649: (1.290016017s)
--- PASS: TestMountStart/serial/Stop (1.29s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.31s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-966649
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-966649: (7.309882724s)
--- PASS: TestMountStart/serial/RestartStopped (8.31s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-966649 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (105.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-767453 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-767453 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m45.189371985s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-767453 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (105.75s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-767453 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-767453 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-767453 -- rollout status deployment/busybox: (3.268332892s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-767453 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-767453 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-767453 -- exec busybox-7b57f96db7-9bdmk -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-767453 -- exec busybox-7b57f96db7-sgwgd -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-767453 -- exec busybox-7b57f96db7-9bdmk -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-767453 -- exec busybox-7b57f96db7-sgwgd -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-767453 -- exec busybox-7b57f96db7-9bdmk -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-767453 -- exec busybox-7b57f96db7-sgwgd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.04s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-767453 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-767453 -- exec busybox-7b57f96db7-9bdmk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-767453 -- exec busybox-7b57f96db7-9bdmk -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-767453 -- exec busybox-7b57f96db7-sgwgd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-767453 -- exec busybox-7b57f96db7-sgwgd -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.97s)
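The shell pipeline in the steps above (`nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`) extracts the third space-separated field of nslookup's fifth output line, which in busybox's output format is the resolved host address. The same parse in Go, over a canned busybox-style response (the response text here is illustrative, not captured from this run):

package main

import (
	"fmt"
	"strings"
)

func main() {
	nslookup := `Server:    10.96.0.10
Address:   10.96.0.10:53

Name:      host.minikube.internal
Address 1: 192.168.67.1 host.minikube.internal`
	lines := strings.Split(nslookup, "\n")
	if len(lines) >= 5 {
		// awk 'NR==5' selects line 5; cut -d' ' -f3 splits on single
		// spaces and takes the third field.
		fields := strings.Split(lines[4], " ")
		if len(fields) >= 3 {
			fmt.Println("host IP:", fields[2]) // 192.168.67.1
		}
	}
}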

                                                
                                    
TestMultiNode/serial/AddNode (57.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-767453 -v=5 --alsologtostderr
E1206 11:28:41.814006  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-767453 -v=5 --alsologtostderr: (56.869746318s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-767453 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (57.62s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-767453 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.74s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-767453 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-767453 cp testdata/cp-test.txt multinode-767453:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-767453 ssh -n multinode-767453 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-767453 cp multinode-767453:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2246026660/001/cp-test_multinode-767453.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-767453 ssh -n multinode-767453 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-767453 cp multinode-767453:/home/docker/cp-test.txt multinode-767453-m02:/home/docker/cp-test_multinode-767453_multinode-767453-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-767453 ssh -n multinode-767453 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-767453 ssh -n multinode-767453-m02 "sudo cat /home/docker/cp-test_multinode-767453_multinode-767453-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-767453 cp multinode-767453:/home/docker/cp-test.txt multinode-767453-m03:/home/docker/cp-test_multinode-767453_multinode-767453-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-767453 ssh -n multinode-767453 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-767453 ssh -n multinode-767453-m03 "sudo cat /home/docker/cp-test_multinode-767453_multinode-767453-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-767453 cp testdata/cp-test.txt multinode-767453-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-767453 ssh -n multinode-767453-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-767453 cp multinode-767453-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2246026660/001/cp-test_multinode-767453-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-767453 ssh -n multinode-767453-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-767453 cp multinode-767453-m02:/home/docker/cp-test.txt multinode-767453:/home/docker/cp-test_multinode-767453-m02_multinode-767453.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-767453 ssh -n multinode-767453-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-767453 ssh -n multinode-767453 "sudo cat /home/docker/cp-test_multinode-767453-m02_multinode-767453.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-767453 cp multinode-767453-m02:/home/docker/cp-test.txt multinode-767453-m03:/home/docker/cp-test_multinode-767453-m02_multinode-767453-m03.txt
E1206 11:28:55.065396  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-767453 ssh -n multinode-767453-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-767453 ssh -n multinode-767453-m03 "sudo cat /home/docker/cp-test_multinode-767453-m02_multinode-767453-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-767453 cp testdata/cp-test.txt multinode-767453-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-767453 ssh -n multinode-767453-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-767453 cp multinode-767453-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2246026660/001/cp-test_multinode-767453-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-767453 ssh -n multinode-767453-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-767453 cp multinode-767453-m03:/home/docker/cp-test.txt multinode-767453:/home/docker/cp-test_multinode-767453-m03_multinode-767453.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-767453 ssh -n multinode-767453-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-767453 ssh -n multinode-767453 "sudo cat /home/docker/cp-test_multinode-767453-m03_multinode-767453.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-767453 cp multinode-767453-m03:/home/docker/cp-test.txt multinode-767453-m02:/home/docker/cp-test_multinode-767453-m03_multinode-767453-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-767453 ssh -n multinode-767453-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-767453 ssh -n multinode-767453-m02 "sudo cat /home/docker/cp-test_multinode-767453-m03_multinode-767453-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.80s)
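Each CopyFile assertion above is a push-then-read-back: `minikube cp` into a node, `ssh -n <node> "sudo cat ..."` out of it, then a content comparison. A sketch of one such round trip, with hypothetical profile and file names:

package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func main() {
	profile := "multinode-demo" // hypothetical
	src := "cp-test.txt"        // hypothetical local file
	want, err := os.ReadFile(src)
	if err != nil {
		log.Fatal(err)
	}
	// Push the file into the primary node's filesystem.
	if out, err := exec.Command("minikube", "-p", profile, "cp", src,
		profile+":/home/docker/cp-test.txt").CombinedOutput(); err != nil {
		log.Fatalf("cp failed: %v\n%s", err, out)
	}
	// Read it back over ssh and compare byte-for-byte.
	got, err := exec.Command("minikube", "-p", profile, "ssh", "-n", profile,
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		log.Fatal(err)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		log.Fatal("round-tripped contents differ")
	}
	log.Print("copy verified")
}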

                                                
                                    
TestMultiNode/serial/StopNode (2.51s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-767453 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-767453 node stop m03: (1.349089698s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-767453 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-767453 status: exit status 7 (591.155741ms)

                                                
                                                
-- stdout --
	multinode-767453
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-767453-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-767453-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-767453 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-767453 status --alsologtostderr: exit status 7 (565.50462ms)

                                                
                                                
-- stdout --
	multinode-767453
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-767453-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-767453-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 11:29:01.311657  503684 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:29:01.311858  503684 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:29:01.311891  503684 out.go:374] Setting ErrFile to fd 2...
	I1206 11:29:01.311915  503684 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:29:01.312193  503684 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 11:29:01.312428  503684 out.go:368] Setting JSON to false
	I1206 11:29:01.312499  503684 mustload.go:66] Loading cluster: multinode-767453
	I1206 11:29:01.312580  503684 notify.go:221] Checking for updates...
	I1206 11:29:01.313979  503684 config.go:182] Loaded profile config "multinode-767453": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 11:29:01.314196  503684 status.go:174] checking status of multinode-767453 ...
	I1206 11:29:01.315718  503684 cli_runner.go:164] Run: docker container inspect multinode-767453 --format={{.State.Status}}
	I1206 11:29:01.335311  503684 status.go:371] multinode-767453 host status = "Running" (err=<nil>)
	I1206 11:29:01.335337  503684 host.go:66] Checking if "multinode-767453" exists ...
	I1206 11:29:01.335728  503684 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-767453
	I1206 11:29:01.359676  503684 host.go:66] Checking if "multinode-767453" exists ...
	I1206 11:29:01.359988  503684 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 11:29:01.360050  503684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-767453
	I1206 11:29:01.382084  503684 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33283 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/multinode-767453/id_rsa Username:docker}
	I1206 11:29:01.489241  503684 ssh_runner.go:195] Run: systemctl --version
	I1206 11:29:01.496126  503684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 11:29:01.509537  503684 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1206 11:29:01.587147  503684 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-06 11:29:01.576990718 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1206 11:29:01.587750  503684 kubeconfig.go:125] found "multinode-767453" server: "https://192.168.67.2:8443"
	I1206 11:29:01.587787  503684 api_server.go:166] Checking apiserver status ...
	I1206 11:29:01.587840  503684 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1206 11:29:01.600215  503684 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1248/cgroup
	I1206 11:29:01.609014  503684 api_server.go:182] apiserver freezer: "5:freezer:/docker/eb91a017a1ef4f3c1d878ff376794f5fc05d711180ce17a3e2b4a64bd022ac51/crio/crio-d431b77e5cb4720ecc75d64e8c8723a9ea2478aa408aea1bace8896a975d6b53"
	I1206 11:29:01.609096  503684 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/eb91a017a1ef4f3c1d878ff376794f5fc05d711180ce17a3e2b4a64bd022ac51/crio/crio-d431b77e5cb4720ecc75d64e8c8723a9ea2478aa408aea1bace8896a975d6b53/freezer.state
	I1206 11:29:01.617337  503684 api_server.go:204] freezer state: "THAWED"
	I1206 11:29:01.617374  503684 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1206 11:29:01.625914  503684 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1206 11:29:01.625942  503684 status.go:463] multinode-767453 apiserver status = Running (err=<nil>)
	I1206 11:29:01.625952  503684 status.go:176] multinode-767453 status: &{Name:multinode-767453 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 11:29:01.625970  503684 status.go:174] checking status of multinode-767453-m02 ...
	I1206 11:29:01.626294  503684 cli_runner.go:164] Run: docker container inspect multinode-767453-m02 --format={{.State.Status}}
	I1206 11:29:01.645570  503684 status.go:371] multinode-767453-m02 host status = "Running" (err=<nil>)
	I1206 11:29:01.645598  503684 host.go:66] Checking if "multinode-767453-m02" exists ...
	I1206 11:29:01.645928  503684 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-767453-m02
	I1206 11:29:01.664060  503684 host.go:66] Checking if "multinode-767453-m02" exists ...
	I1206 11:29:01.665234  503684 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1206 11:29:01.665290  503684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-767453-m02
	I1206 11:29:01.683792  503684 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33288 SSHKeyPath:/home/jenkins/minikube-integration/22047-362985/.minikube/machines/multinode-767453-m02/id_rsa Username:docker}
	I1206 11:29:01.788747  503684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1206 11:29:01.801823  503684 status.go:176] multinode-767453-m02 status: &{Name:multinode-767453-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1206 11:29:01.801858  503684 status.go:174] checking status of multinode-767453-m03 ...
	I1206 11:29:01.802207  503684 cli_runner.go:164] Run: docker container inspect multinode-767453-m03 --format={{.State.Status}}
	I1206 11:29:01.819506  503684 status.go:371] multinode-767453-m03 host status = "Stopped" (err=<nil>)
	I1206 11:29:01.819529  503684 status.go:384] host is not running, skipping remaining checks
	I1206 11:29:01.819542  503684 status.go:176] multinode-767453-m03 status: &{Name:multinode-767453-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.51s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-767453 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-767453 node start m03 -v=5 --alsologtostderr: (7.618083883s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-767453 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.46s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (76.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-767453
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-767453
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-767453: (25.148673659s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-767453 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-767453 --wait=true -v=5 --alsologtostderr: (50.998593168s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-767453
--- PASS: TestMultiNode/serial/RestartKeepsNodes (76.28s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-767453 node delete m03
E1206 11:30:28.439675  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-205266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-767453 node delete m03: (5.083213541s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-767453 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.81s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-767453 stop
E1206 11:30:45.364673  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-205266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-767453 stop: (23.883150259s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-767453 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-767453 status: exit status 7 (100.214129ms)

                                                
                                                
-- stdout --
	multinode-767453
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-767453-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-767453 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-767453 status --alsologtostderr: exit status 7 (94.516093ms)

                                                
                                                
-- stdout --
	multinode-767453
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-767453-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1206 11:30:56.417442  511517 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:30:56.417557  511517 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:30:56.417593  511517 out.go:374] Setting ErrFile to fd 2...
	I1206 11:30:56.417608  511517 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:30:56.417864  511517 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 11:30:56.418059  511517 out.go:368] Setting JSON to false
	I1206 11:30:56.418094  511517 mustload.go:66] Loading cluster: multinode-767453
	I1206 11:30:56.418197  511517 notify.go:221] Checking for updates...
	I1206 11:30:56.418596  511517 config.go:182] Loaded profile config "multinode-767453": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 11:30:56.418645  511517 status.go:174] checking status of multinode-767453 ...
	I1206 11:30:56.419205  511517 cli_runner.go:164] Run: docker container inspect multinode-767453 --format={{.State.Status}}
	I1206 11:30:56.438225  511517 status.go:371] multinode-767453 host status = "Stopped" (err=<nil>)
	I1206 11:30:56.438295  511517 status.go:384] host is not running, skipping remaining checks
	I1206 11:30:56.438303  511517 status.go:176] multinode-767453 status: &{Name:multinode-767453 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1206 11:30:56.438333  511517 status.go:174] checking status of multinode-767453-m02 ...
	I1206 11:30:56.438648  511517 cli_runner.go:164] Run: docker container inspect multinode-767453-m02 --format={{.State.Status}}
	I1206 11:30:56.462722  511517 status.go:371] multinode-767453-m02 host status = "Stopped" (err=<nil>)
	I1206 11:30:56.462742  511517 status.go:384] host is not running, skipping remaining checks
	I1206 11:30:56.462803  511517 status.go:176] multinode-767453-m02 status: &{Name:multinode-767453-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.08s)
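Note how both StopNode and StopMultiNode treat `minikube status` exiting 7 as the expected outcome: throughout this report, a stopped host maps to exit status 7 rather than 0. A sketch that interprets the exit code the same way (profile name is hypothetical; in the log above it is multinode-767453):

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "-p", "multinode-demo", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all nodes running")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
		// Matches the "Non-zero exit ... exit status 7" records above.
		fmt.Println("at least one host is stopped (exit status 7)")
	default:
		log.Fatal(err)
	}
}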

                                                
                                    
TestMultiNode/serial/RestartMultiNode (49.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-767453 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-767453 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (48.461006674s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-767453 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (49.32s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (36.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-767453
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-767453-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-767453-m02 --driver=docker  --container-runtime=crio: exit status 14 (98.251865ms)

                                                
                                                
-- stdout --
	* [multinode-767453-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-362985/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-362985/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-767453-m02' is duplicated with machine name 'multinode-767453-m02' in profile 'multinode-767453'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-767453-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-767453-m03 --driver=docker  --container-runtime=crio: (33.889920041s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-767453
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-767453: exit status 80 (410.239103ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-767453 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-767453-m03 already exists in multinode-767453-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-767453-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-767453-m03: (2.090299136s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.54s)

                                                
                                    
TestPreload (123.22s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-983448 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
preload_test.go:41: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-983448 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (1m2.368844022s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-983448 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 -p test-preload-983448 image pull gcr.io/k8s-minikube/busybox: (2.401689677s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-983448
preload_test.go:55: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-983448: (5.970721726s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-983448 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1206 11:33:41.814238  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:33:55.064960  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-983448 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (49.810161824s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-983448 image list
helpers_test.go:175: Cleaning up "test-preload-983448" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-983448
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-983448: (2.407592868s)
--- PASS: TestPreload (123.22s)
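The closing assertion in TestPreload is that the busybox image pulled before the stop is still present after the preloaded restart. A sketch of that check via `image list`, with a hypothetical profile name:

package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	profile := "preload-demo" // hypothetical
	out, err := exec.Command("minikube", "-p", profile, "image", "list").Output()
	if err != nil {
		log.Fatal(err)
	}
	// The test pulls gcr.io/k8s-minikube/busybox before stopping, then
	// expects it to survive the --preload=true restart.
	if strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
		log.Print("busybox image survived the restart")
	} else {
		log.Print("busybox image missing after restart")
	}
}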

                                                
                                    
TestScheduledStopUnix (109.27s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-783574 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-783574 --memory=3072 --driver=docker  --container-runtime=crio: (33.153129959s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-783574 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1206 11:35:03.119140  525738 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:35:03.119330  525738 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:35:03.119344  525738 out.go:374] Setting ErrFile to fd 2...
	I1206 11:35:03.119351  525738 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:35:03.119670  525738 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 11:35:03.119961  525738 out.go:368] Setting JSON to false
	I1206 11:35:03.120126  525738 mustload.go:66] Loading cluster: scheduled-stop-783574
	I1206 11:35:03.120528  525738 config.go:182] Loaded profile config "scheduled-stop-783574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 11:35:03.120611  525738 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/scheduled-stop-783574/config.json ...
	I1206 11:35:03.120822  525738 mustload.go:66] Loading cluster: scheduled-stop-783574
	I1206 11:35:03.120986  525738 config.go:182] Loaded profile config "scheduled-stop-783574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-783574 -n scheduled-stop-783574
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-783574 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1206 11:35:03.613381  525826 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:35:03.614200  525826 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:35:03.614217  525826 out.go:374] Setting ErrFile to fd 2...
	I1206 11:35:03.614225  525826 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:35:03.614523  525826 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 11:35:03.614871  525826 out.go:368] Setting JSON to false
	I1206 11:35:03.615111  525826 daemonize_unix.go:73] killing process 525754 as it is an old scheduled stop
	I1206 11:35:03.615220  525826 mustload.go:66] Loading cluster: scheduled-stop-783574
	I1206 11:35:03.615702  525826 config.go:182] Loaded profile config "scheduled-stop-783574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 11:35:03.615838  525826 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/scheduled-stop-783574/config.json ...
	I1206 11:35:03.616100  525826 mustload.go:66] Loading cluster: scheduled-stop-783574
	I1206 11:35:03.616270  525826 config.go:182] Loaded profile config "scheduled-stop-783574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1206 11:35:03.624356  364855 retry.go:31] will retry after 140.733µs: open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/scheduled-stop-783574/pid: no such file or directory
I1206 11:35:03.624960  364855 retry.go:31] will retry after 171.033µs: open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/scheduled-stop-783574/pid: no such file or directory
I1206 11:35:03.626334  364855 retry.go:31] will retry after 305.595µs: open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/scheduled-stop-783574/pid: no such file or directory
I1206 11:35:03.627542  364855 retry.go:31] will retry after 477.916µs: open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/scheduled-stop-783574/pid: no such file or directory
I1206 11:35:03.628723  364855 retry.go:31] will retry after 686.671µs: open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/scheduled-stop-783574/pid: no such file or directory
I1206 11:35:03.629873  364855 retry.go:31] will retry after 800.769µs: open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/scheduled-stop-783574/pid: no such file or directory
I1206 11:35:03.631030  364855 retry.go:31] will retry after 1.61938ms: open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/scheduled-stop-783574/pid: no such file or directory
I1206 11:35:03.633301  364855 retry.go:31] will retry after 1.959496ms: open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/scheduled-stop-783574/pid: no such file or directory
I1206 11:35:03.635551  364855 retry.go:31] will retry after 3.228407ms: open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/scheduled-stop-783574/pid: no such file or directory
I1206 11:35:03.639783  364855 retry.go:31] will retry after 4.028806ms: open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/scheduled-stop-783574/pid: no such file or directory
I1206 11:35:03.643993  364855 retry.go:31] will retry after 8.321466ms: open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/scheduled-stop-783574/pid: no such file or directory
I1206 11:35:03.653260  364855 retry.go:31] will retry after 4.469017ms: open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/scheduled-stop-783574/pid: no such file or directory
I1206 11:35:03.658688  364855 retry.go:31] will retry after 13.237909ms: open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/scheduled-stop-783574/pid: no such file or directory
I1206 11:35:03.672946  364855 retry.go:31] will retry after 12.233758ms: open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/scheduled-stop-783574/pid: no such file or directory
I1206 11:35:03.686191  364855 retry.go:31] will retry after 28.366997ms: open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/scheduled-stop-783574/pid: no such file or directory
I1206 11:35:03.715465  364855 retry.go:31] will retry after 42.220901ms: open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/scheduled-stop-783574/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-783574 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-783574 -n scheduled-stop-783574
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-783574
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-783574 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1206 11:35:29.600306  526192 out.go:360] Setting OutFile to fd 1 ...
	I1206 11:35:29.600532  526192 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:35:29.600564  526192 out.go:374] Setting ErrFile to fd 2...
	I1206 11:35:29.600593  526192 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1206 11:35:29.600924  526192 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22047-362985/.minikube/bin
	I1206 11:35:29.601260  526192 out.go:368] Setting JSON to false
	I1206 11:35:29.601414  526192 mustload.go:66] Loading cluster: scheduled-stop-783574
	I1206 11:35:29.601856  526192 config.go:182] Loaded profile config "scheduled-stop-783574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1206 11:35:29.602029  526192 profile.go:143] Saving config to /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/scheduled-stop-783574/config.json ...
	I1206 11:35:29.602298  526192 mustload.go:66] Loading cluster: scheduled-stop-783574
	I1206 11:35:29.602482  526192 config.go:182] Loaded profile config "scheduled-stop-783574": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1206 11:35:45.365340  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-205266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-783574
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-783574: exit status 7 (71.98579ms)

                                                
                                                
-- stdout --
	scheduled-stop-783574
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-783574 -n scheduled-stop-783574
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-783574 -n scheduled-stop-783574: exit status 7 (69.331125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-783574" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-783574
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-783574: (4.423128632s)
--- PASS: TestScheduledStopUnix (109.27s)
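The schedule/cancel choreography above reduces to repeated stop invocations: schedule far out, re-schedule (which kills the earlier scheduler process, per the "killing process ... old scheduled stop" line), and cancel. A minimal sketch of schedule-then-cancel, with a hypothetical profile name:

package main

import (
	"log"
	"os/exec"
)

func main() {
	profile := "scheduled-stop-demo" // hypothetical
	// Schedule a stop five minutes out; the command returns immediately
	// and leaves a background scheduler process behind.
	if out, err := exec.Command("minikube", "stop", "-p", profile,
		"--schedule", "5m").CombinedOutput(); err != nil {
		log.Fatalf("schedule failed: %v\n%s", err, out)
	}
	// Cancel it before it fires.
	if out, err := exec.Command("minikube", "stop", "-p", profile,
		"--cancel-scheduled").CombinedOutput(); err != nil {
		log.Fatalf("cancel failed: %v\n%s", err, out)
	}
	log.Print("scheduled stop cancelled")
}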

                                                
                                    
TestInsufficientStorage (12.73s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-557235 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-557235 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.071119591s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"7ae75236-566d-4dd3-8e02-4fde81f6ae2e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-557235] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"02c14bc9-7311-4f94-bfc6-e14690d65211","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22047"}}
	{"specversion":"1.0","id":"aad7123e-eaca-469b-96b1-f8d7f99f9800","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"dd7e1fe3-54ca-4a23-a731-47783dc849f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22047-362985/kubeconfig"}}
	{"specversion":"1.0","id":"f10a3f45-6361-4598-844d-6aa1fd263f75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-362985/.minikube"}}
	{"specversion":"1.0","id":"0cb0b5cf-a62b-4517-aae2-6d3b2bf86195","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"76955075-199d-4cb6-ac6e-566e5428949f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6497ec1f-5d4b-46dd-872e-8a933c83e67a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"cc5f3419-7fb8-4999-8af7-0c5fb15d6c69","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"bd09f2cb-8363-4953-be9e-a1023e47bc35","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"6d5e917a-b9bd-44a2-8fde-94537b4f260a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"4c06f6d3-0a52-4d5a-a6d3-53958264c946","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-557235\" primary control-plane node in \"insufficient-storage-557235\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"7adf7f56-cdd5-4477-bd54-6a9bae88927f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1764843390-22032 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"c448dbfe-2821-469f-a5a5-a7c5c2daab61","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"6f2b7662-0108-4fee-8a74-182c445be7fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
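Each line of the --output=json stream above is a self-contained CloudEvents envelope, so the stream can be consumed with a plain line scanner. Below is a minimal Go sketch; the struct covers only the envelope fields visible in the events above, not minikube's full schema, and the filename decode_events.go is illustrative.

	// decode_events.go: read minikube's line-delimited CloudEvents from stdin,
	// e.g. `minikube start -p demo --output=json | go run decode_events.go`.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event mirrors the envelope fields visible in the stream above.
	type event struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var e event
			if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
				continue // tolerate non-JSON noise between events
			}
			// Failures such as RSRC_DOCKER_STORAGE arrive as *.error events.
			if e.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("error %s: %s\n", e.Data["exitcode"], e.Data["message"])
			}
		}
	}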
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-557235 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-557235 --output=json --layout=cluster: exit status 7 (318.579275ms)
-- stdout --
	{"Name":"insufficient-storage-557235","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-557235","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1206 11:36:29.549800  527910 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-557235" does not appear in /home/jenkins/minikube-integration/22047-362985/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-557235 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-557235 --output=json --layout=cluster: exit status 7 (313.352933ms)
-- stdout --
	{"Name":"insufficient-storage-557235","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-557235","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1206 11:36:29.862719  527977 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-557235" does not appear in /home/jenkins/minikube-integration/22047-362985/kubeconfig
	E1206 11:36:29.873196  527977 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/insufficient-storage-557235/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-557235" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-557235
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-557235: (2.024332913s)
--- PASS: TestInsufficientStorage (12.73s)
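The status --output=json --layout=cluster payloads above have a stable top-level shape, so the InsufficientStorage condition can be detected mechanically. A minimal Go sketch follows, with types inferred from the output shown (only the fields used are declared; this is not the full minikube schema):

	// check_status.go: decode `minikube status --output=json --layout=cluster`
	// from stdin and flag the 507/InsufficientStorage state seen above.
	package main

	import (
		"encoding/json"
		"fmt"
		"io"
		"os"
	)

	type clusterStatus struct {
		Name         string `json:"Name"`
		StatusCode   int    `json:"StatusCode"`
		StatusName   string `json:"StatusName"`
		StatusDetail string `json:"StatusDetail"`
	}

	func main() {
		raw, err := io.ReadAll(os.Stdin)
		if err != nil {
			panic(err)
		}
		var st clusterStatus
		if err := json.Unmarshal(raw, &st); err != nil {
			panic(err)
		}
		// 507 mirrors HTTP "Insufficient Storage", matching the report output.
		if st.StatusCode == 507 {
			fmt.Printf("%s: %s (%s)\n", st.Name, st.StatusName, st.StatusDetail)
		}
	}

Note that the status command itself exits 7 in this state (as logged above), so in a pipeline the JSON should be captured before the exit code is checked.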
x
+
TestRunningBinaryUpgrade (300.21s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.987906088 start -p running-upgrade-141321 --memory=3072 --vm-driver=docker  --container-runtime=crio
E1206 11:43:55.065110  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.987906088 start -p running-upgrade-141321 --memory=3072 --vm-driver=docker  --container-runtime=crio: (33.261864982s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-141321 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1206 11:45:45.364350  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-205266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:47:08.442646  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-205266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-141321 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m24.021087122s)
helpers_test.go:175: Cleaning up "running-upgrade-141321" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-141321
E1206 11:48:41.813803  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-141321: (2.007194756s)
--- PASS: TestRunningBinaryUpgrade (300.21s)
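The flow exercised here upgrades a live cluster in place: start the profile with the previous release's binary, then re-run start on the same profile with the binary under test. A hedged sketch of that sequence with os/exec; the old-binary path is a placeholder (the /tmp/minikube-v1.35.0.* names above are per-run temp files), and upgrade_flow.go is illustrative, not the test's actual code:

	// upgrade_flow.go: sketch of the running-binary upgrade sequence.
	package main

	import (
		"os"
		"os/exec"
	)

	func run(bin string, args ...string) error {
		cmd := exec.Command(bin, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		const profile = "running-upgrade-141321"
		// 1. Start the cluster with the previous release (placeholder path).
		if err := run("/tmp/minikube-old", "start", "-p", profile,
			"--memory=3072", "--vm-driver=docker", "--container-runtime=crio"); err != nil {
			panic(err)
		}
		// 2. Upgrade in place: same profile, new binary, cluster still running.
		if err := run("out/minikube-linux-arm64", "start", "-p", profile,
			"--memory=3072", "--driver=docker", "--container-runtime=crio"); err != nil {
			panic(err)
		}
		// 3. Clean up the profile, mirroring the helper at the end of the test.
		_ = run("out/minikube-linux-arm64", "delete", "-p", profile)
	}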
x
+
TestMissingContainerUpgrade (119.47s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.4134273789 start -p missing-upgrade-887720 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.4134273789 start -p missing-upgrade-887720 --memory=3072 --driver=docker  --container-runtime=crio: (1m2.774019983s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-887720
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-887720
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-887720 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-887720 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (52.420591358s)
helpers_test.go:175: Cleaning up "missing-upgrade-887720" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-887720
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-887720: (2.30962631s)
--- PASS: TestMissingContainerUpgrade (119.47s)
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-365903 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-365903 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (111.122856ms)
-- stdout --
	* [NoKubernetes-365903] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22047
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22047-362985/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22047-362985/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
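The MK_USAGE failure above is pure flag validation: --no-kubernetes and --kubernetes-version are mutually exclusive, and minikube maps that usage error to exit code 14. A minimal sketch of this kind of guard (illustrative only, not minikube's actual implementation):

	// flag_guard.go: a mutually-exclusive-flags check of the kind seen above.
	package main

	import (
		"flag"
		"fmt"
		"os"
	)

	func main() {
		noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
		version := flag.String("kubernetes-version", "", "Kubernetes version to run")
		flag.Parse()

		// Exit code 14 corresponds to the MK_USAGE class in the log above.
		if *noK8s && *version != "" {
			fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
			os.Exit(14)
		}
	}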
x
+
TestNoKubernetes/serial/StartWithK8s (44.17s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-365903 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-365903 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (43.59844159s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-365903 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (44.17s)
x
+
TestNoKubernetes/serial/StartWithStopK8s (27.94s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-365903 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-365903 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (25.552156095s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-365903 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-365903 status -o json: exit status 2 (322.9351ms)
-- stdout --
	{"Name":"NoKubernetes-365903","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-365903
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-365903: (2.067969375s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (27.94s)
x
+
TestNoKubernetes/serial/Start (8.38s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-365903 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-365903 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (8.379005141s)
--- PASS: TestNoKubernetes/serial/Start (8.38s)
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22047-362985/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-365903 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-365903 "sudo systemctl is-active --quiet service kubelet": exit status 1 (279.16278ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
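The verification above leans on systemctl's exit status propagated through minikube ssh: is-active --quiet exits non-zero for a unit that is not running (the log shows ssh surfacing status 3), and the test treats that failure as success. A small Go sketch of the same probe, with the profile name and command string taken from the log:

	// verify_no_kubelet.go: confirm kubelet is inactive inside the node.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "ssh", "-p", "NoKubernetes-365903",
			"sudo systemctl is-active --quiet service kubelet")
		// A non-zero exit means the kubelet unit is not active, which is
		// exactly what --no-kubernetes mode should produce.
		if err := cmd.Run(); err != nil {
			fmt.Println("kubelet is not active (expected)")
			return
		}
		fmt.Println("kubelet is active (unexpected)")
	}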
x
+
TestNoKubernetes/serial/ProfileList (0.7s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.70s)
x
+
TestNoKubernetes/serial/Stop (1.3s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-365903
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-365903: (1.296642782s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)
x
+
TestNoKubernetes/serial/StartNoArgs (7.79s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-365903 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-365903 --driver=docker  --container-runtime=crio: (7.786725498s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.79s)
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.4s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-365903 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-365903 "sudo systemctl is-active --quiet service kubelet": exit status 1 (401.215998ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.40s)
x
+
TestStoppedBinaryUpgrade/Setup (1.06s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.06s)
x
+
TestStoppedBinaryUpgrade/Upgrade (307.01s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.872440994 start -p stopped-upgrade-130351 --memory=3072 --vm-driver=docker  --container-runtime=crio
E1206 11:38:41.814123  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.872440994 start -p stopped-upgrade-130351 --memory=3072 --vm-driver=docker  --container-runtime=crio: (38.573971573s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.872440994 -p stopped-upgrade-130351 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.872440994 -p stopped-upgrade-130351 stop: (1.242722696s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-130351 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1206 11:40:04.890365  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/addons-545880/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:40:45.364991  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-205266/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1206 11:41:58.141894  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-130351 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m27.181504757s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (307.01s)
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.79s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-130351
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-130351: (1.785141311s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.79s)
x
+
TestPause/serial/Start (83.16s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-508007 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1206 11:48:55.065260  364855 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22047-362985/.minikube/profiles/functional-196950/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-508007 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m23.162286705s)
--- PASS: TestPause/serial/Start (83.16s)
x
+
TestPause/serial/SecondStartNoReconfiguration (30.34s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-508007 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-508007 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (30.31539293s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (30.34s)

Test skip (36/316)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0.44
31 TestOffline 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
112 TestFunctional/parallel/MySQL 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
132 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
261 TestGvisorAddon 0
283 TestImageBuild 0
284 TestISOImage 0
348 TestChangeNoneUser 0
351 TestScheduledStopWindows 0
353 TestSkaffold 0
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)
x
+
TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)
x
+
TestDownloadOnly/v1.34.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)
x
+
TestDownloadOnly/v1.34.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)
x
+
TestDownloadOnly/v1.34.2/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)
x
+
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)
x
+
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)
x
+
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)
x
+
TestDownloadOnlyKic (0.44s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-566856 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-566856" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-566856
--- SKIP: TestDownloadOnlyKic (0.44s)
x
+
TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)
x
+
TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)
x
+
TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)
x
+
TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)
x
+
TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)
x
+
TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)
x
+
TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)
x
+
TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)
x
+
TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0.00s)
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)
x
+
TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
x
+
TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)
x
+
TestISOImage (0s)
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)
x
+
TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)
x
+
TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)
x
+
TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)